<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Results of the Ontology Alignment Evaluation Initiative 2008</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Caterina Caracciolo</string-name>
          <email>Caterina.Caracciolo@fao.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jérôme Euzenat</string-name>
          <email>jerome.euzenat@inria.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Laura Hollink</string-name>
          <email>laurah@few.vu.nl</email>
          <xref ref-type="aff" rid="aff7">7</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ryutaro Ichise</string-name>
          <email>ichise@nii.ac.jp</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antoine Isaac</string-name>
          <email>aisaac@few.vu.nl</email>
          <xref ref-type="aff" rid="aff7">7</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Véronique Malaisé</string-name>
          <xref ref-type="aff" rid="aff7">7</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christian Meilicke</string-name>
          <email>christian@informatik.uni-mannheim.de</email>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Juan Pane</string-name>
          <email>pane@dit.unitn.it</email>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavel Shvaiko</string-name>
          <email>pavel.shvaiko@infotn.it</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Heiner Stuckenschmidt</string-name>
          <email>heiner@informatik.uni-mannheim.de</email>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ondřej Šváb-Zamazal</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vojtěch Svátek</string-name>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>FAO</institution>
          ,
          <addr-line>Roma</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>INRIA &amp; LIG</institution>
          ,
          <addr-line>Montbonnot</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>National Institute of Informatics</institution>
          ,
          <addr-line>Tokyo</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>TasLab</institution>
          ,
          <addr-line>Informatica Trentina, Trento</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Economics</institution>
          ,
          <addr-line>Prague</addr-line>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>University of Mannheim</institution>
          ,
          <addr-line>Mannheim</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff6">
          <label>6</label>
          <institution>University of Trento</institution>
          ,
          <addr-line>Povo, Trento</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff7">
          <label>7</label>
          <institution>Vrije Universiteit Amsterdam</institution>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Ontology matching consists of finding correspondences between ontology entities. OAEI campaigns aim at comparing ontology matching systems on precisely defined test sets. Test sets can use ontologies of different nature (from expressive OWL ontologies to simple directories) and use different modalities, e.g., blind evaluation, open evaluation, consensus. OAEI-2008 builds on previous campaigns by having 4 tracks with 8 test sets followed by 13 participants. Following the trend of previous years, more participants reach the forefront. The official results of the campaign are those published on the OAEI web site.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The Ontology Alignment Evaluation Initiative (OAEI, http://oaei.ontologymatching.org) is a coordinated international
initiative that organizes the evaluation of the increasing number of ontology matching
systems [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The main goal of the Ontology Alignment Evaluation Initiative is to
compare systems and algorithms on the same basis and to allow anyone to draw
conclusions about the best matching strategies. Our ambition is that, from such evaluations,
tool developers can learn and improve their systems. The OAEI campaign provides the
evaluation of matching systems on consensus test cases. (This paper improves on the
“First results” initially published in the on-site proceedings of the ISWC workshop on
Ontology Matching (OM-2008). The only official results of the campaign, however, are
on the OAEI web site.)
      </p>
      <p>
        The first two events were organized in 2004: (i) the Information Interpretation and
Integration Conference (I3CON) held at the NIST Performance Metrics for Intelligent
Systems (PerMIS) workshop and (ii) the Ontology Alignment Contest held at the
Evaluation of Ontology-based Tools (EON) workshop of the annual International Semantic
Web Conference (ISWC) [18]. Then, unique OAEI campaigns occurred in 2005 at the
workshop on Integrating Ontologies held in conjunction with the International
Conference on Knowledge Capture (K-Cap) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], in 2006 at the first Ontology Matching
workshop collocated with ISWC [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and in 2007 at the second Ontology Matching
workshop collocated with ISWC+ASWC [8]. Finally, in 2008, OAEI results were
presented at the third Ontology Matching workshop collocated with ISWC, in Karlsruhe,
Germany (http://om2008.ontologymatching.org).
      </p>
      <p>We have continued previous years’ trend by having a large variety of test cases that
emphasize different aspects of ontology matching. We have kept particular modalities
of evaluation for some of these test cases, such as a consensus building workshop.</p>
      <p>This paper serves as an introduction to the evaluation campaign of 2008 and to the
results provided in the following papers. The remainder of the paper is organized as
follows. In Section 2 we present the overall testing methodology that has been used.
Sections 3-10 discuss in turn the settings and the results of each of the test cases.
Section 11 overviews lessons learned from the campaign. Finally, Section 12 outlines future
plans and Section 13 concludes.
</p>
    </sec>
    <sec id="sec-2">
      <title>General methodology</title>
      <p>We first present the test cases proposed this year to OAEI participants. Then we
describe the three steps of the OAEI campaign and report on the general execution of the
campaign. In particular, we list participants and the tests they considered.
</p>
      <sec id="sec-2-1">
        <title>Tracks and test cases</title>
        <p>This year’s campaign has consisted of four tracks gathering eight data sets and different
evaluation modalities.</p>
        <p>The benchmark track (§3): Like in previous campaigns, a systematic benchmark
series has been produced. The goal of this benchmark series is to identify the areas in
which each matching algorithm is strong and weak. The test is based on one
particular ontology dedicated to the very narrow domain of bibliography and a number
of alternative ontologies of the same domain for which alignments are provided.</p>
        <p>The expressive ontologies track offers ontologies using OWL modeling capabilities:</p>
        <p>Anatomy (§4): The anatomy real world case is about matching the Adult Mouse
Anatomy (2744 classes) and the NCI Thesaurus (3304 classes) describing the
human anatomy.</p>
        <sec id="sec-2-1-1">
          <title>2 http://om2008.ontologymatching.org</title>
          <p>FAO (§5): The FAO test case is a real-life case aiming at matching OWL
ontologies developed by the Food and Agriculture Organization of the United Nations
(FAO) related to the fisheries domain.</p>
          <p>The directories and thesauri track proposed web directories, thesauri and generally
less expressive resources:</p>
          <p>Directory (§6): The directory real world case consists of matching web site
directories (like the Open Directory or Yahoo’s). It comprises more than four thousand
elementary tests.</p>
          <p>Multilingual directories (§7): The mldirectory real world case consists of
matching web site directories (such as Google, Lycos and Yahoo’s) in different
languages, e.g., English and Japanese. Data sets are excerpts of directories that
contain approximately one thousand categories.</p>
          <p>Library (§8): Two SKOS thesauri about books have to be matched using relations
from the SKOS Mapping vocabulary. Samples of the results are evaluated by
domain experts. In addition, we ran an application-dependent evaluation.</p>
          <p>Very large crosslingual resources (§9): The vlcr real world test case requires matching
very large resources available on the web, viz. DBPedia,
WordNet and the Dutch audiovisual archive (GTAA). DBPedia is multilingual and
GTAA is in Dutch.</p>
          <p>The conference track and consensus workshop (§10): Participants were asked to
freely explore a collection of conference organization ontologies (the domain being
well understandable for every researcher). This effort was expected to materialize
in alignments as well as in interesting individual correspondences (“nuggets”),
aggregated statistical observations and/or implicit design patterns. Organizers of this
track offered diverse a priori and a posteriori evaluation of results. For a selected
sample of correspondences, consensus was sought at the workshop and the process
was tracked and recorded.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Preparatory phase</title>
        <p>Ontologies to be matched and (where applicable) alignments have been provided in
advance during the period between May 19th and June 15th, 2008. This gave potential
participants the occasion to send observations, bug corrections, remarks and other test
cases to the organizers. The goal of this preparatory period was to ensure that the delivered
tests make sense to the participants. The final test base was released on July 1st. The
data sets did not evolve after this period.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Execution phase</title>
        <p>During the execution phase, participants used their systems to automatically match the
ontologies from the test cases. Participants have been asked to use one algorithm and the
same set of parameters for all tests in all tracks. It is fair to select the set of parameters
that provide the best results (for the tests where results are known). Besides parameters,
the input of the algorithms must be the two ontologies to be matched and any general
purpose resource available to everyone, i.e., no resource especially designed for the
test. In particular, the participants should not use the data (ontologies and reference
alignments) from other test sets to help their algorithms.</p>
        <p>
          In most cases, ontologies are described in OWL-DL and serialized in the RDF/XML
format. The expected alignments are provided in the Alignment format expressed in
RDF/XML [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. Participants also provided the papers that are published hereafter and a
link to their systems and their configuration parameters.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>Evaluation phase</title>
        <p>The organizers have evaluated the alignments provided by the participants and returned
comparisons on these results.</p>
        <p>In order to ensure that the provided results could be processed automatically, the
participants were requested to provide (preliminary) results by September 1st. In
the case of blind tests, only the organizers did the evaluation with regard to the withheld
reference alignments.</p>
        <p>The standard evaluation measures are precision and recall computed against the
reference alignments. For aggregating these measures we use weighted
harmonic means (weights being the size of the true positives). This clearly helps in the
case of empty alignments. Another technique that has been used is the computation of
precision/recall graphs, so participants were advised to provide their results with a
weight attached to each correspondence they found. New measures addressing some limitations
of precision and recall have also been used for testing purposes, as well as measures
compensating for the lack of complete reference alignments.</p>
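        <p>As an illustration, here is a minimal sketch (in Python, with hypothetical data; not
the actual OAEI evaluation code) of precision, recall, and one common reading of their
aggregation by a weighted harmonic mean with true-positive weights:</p>
        <preformat>
# Minimal sketch of the standard OAEI measures, assuming each alignment
# is a set of hashable correspondences, e.g. (entity1, entity2) pairs.

def precision(alignment, reference):
    found = len(alignment)
    return len(alignment.intersection(reference)) / found if found else 0.0

def recall(alignment, reference):
    return len(alignment.intersection(reference)) / len(reference)

def weighted_harmonic_mean(values, weights):
    """Aggregate per-test measures; weights are the true-positive counts.
    Tests with weight 0 (e.g. empty alignments) contribute nothing."""
    pairs = [(v, w) for v, w in zip(values, weights) if w]
    if not pairs:
        return 0.0
    return sum(w for _, w in pairs) / sum(w / v for v, w in pairs)

# Hypothetical two-test example: (alignment found, reference alignment)
tests = [({("a", "x"), ("b", "y")}, {("a", "x"), ("c", "z")}),
         ({("d", "u")}, {("d", "u")})]
ps = [precision(a, r) for a, r in tests]
ws = [len(a.intersection(r)) for a, r in tests]  # true positives per test
print(weighted_harmonic_mean(ps, ws))            # aggregated precision
        </preformat>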
        <p>In addition, the Library test case featured an application-specific evaluation, and a
consensus workshop was held for evaluating particular correspondences.</p>
        <p>[Table 2: participants and the tests they entered. Systems: Anchor-Flood, AROMA,
ASMOV, CIDER, DSSim, GeRoMe, Lily, MapPSO, RiMOM, SAMBO, SAMBOdtf, SPIDER,
TaxoMap; 13 in total. All 13 systems entered the benchmark track, 9 the anatomy track.]</p>
      </sec>
      <sec id="sec-2-5">
        <title>Comments on the execution</title>
        <p>This year, for the first time, we had fewer participants than in the previous year (though
still more than in 2006): 4 in 2004, 7 in 2005, 10 in 2006, 18 in 2007, and 13 in 2008.
However, participants were able to enter nearly as many individual tasks as last year:
48 against 50.</p>
        <p>We did not have enough time to systematically validate the results provided
by the participants, but we ran a few systems and scrutinized some of the
results.</p>
        <p>We summarize the list of participants in Table 2. As in previous years, not all
participants provided results for all tests. They usually entered those which are easier to
run, such as benchmark, directory and conference. The variety of tests and the short
time given to provide results have certainly prevented participants from considering
more tests.</p>
        <p>There is an even distribution of systems on tests (unlike last year when there were
two groups of systems depending on the size of the ontologies). This year’s participation
seems to be weakly correlated with the fact that a test has been offered before.</p>
        <p>This year, we can still regret not having had enough time for performing tests and
evaluations. This may explain why even participants with good results last year did not
participate this year. The summary of the results track by track is provided in the
following seven sections.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Benchmark</title>
      <p>The goal of the benchmark tests is to provide a stable and detailed picture of each
algorithm. For that purpose, the algorithms are run on systematically generated test
cases.</p>
      <sec id="sec-3-1">
        <title>Test set</title>
        <p>The domain of this first test is Bibliographic references. It is, of course, based on a
subjective view of what must be a bibliographic ontology. There can be many different
classifications of publications, for example, based on area and quality. The one
chosen here is common among scholars and is based on publication categories; like many
ontologies (tests #301-304), it is reminiscent of BibTeX.</p>
        <p>The systematic benchmark test set is built around one reference ontology and
many variations of it. The ontologies are described in OWL-DL and serialized in the
RDF/XML format. The reference ontology is that of test #101. It contains 33 named
classes, 24 object properties, 40 data properties, 56 named individuals and 20
anonymous individuals. Participants have to match this reference ontology with the variations.
Variations are focused on the characterization of the behavior of the tools rather than
having them compete on real-life problems. They are organized in three groups:
Simple tests (1xx) such as comparing the reference ontology with itself, with another
irrelevant ontology (the wine ontology used in the OWL primer) or the same
ontology in its restriction to OWL-Lite;
Systematic tests (2xx) obtained by discarding features from some reference ontology.</p>
        <p>These tests aim at evaluating how an algorithm behaves when a particular type of
information is lacking. The considered features were:
– Name of entities that can be replaced by random strings, synonyms, name with
different conventions, strings in another language than English;
– Comments that can be suppressed or translated in another language;
– Specialization hierarchy that can be suppressed, expanded or flattened;
– Instances that can be suppressed;
– Properties that can be suppressed or have their restrictions on classes
discarded;
– Classes that can be expanded, i.e., replaced by several classes or flattened.</p>
        <p>Four real-life ontologies of bibliographic references (3xx) found on the web and left
mostly untouched (xmlns and xml:base attributes were added).</p>
        <p>Since the goal of these tests is to offer some kind of permanent benchmarks to be
used by many, the test is an extension of the 2004 EON Ontology Alignment Contest,
whose test numbering it (almost) fully preserves.</p>
        <p>Following remarks from last year, we made two changes to the tests this year:
– tests #249 and #253 still had instances in the ontologies; these have been suppressed
this year, hence the tests are more difficult than in previous years;
– tests which scrambled all labels within the ontology (#201-202, #248-254 and
#257-262) have been complemented by tests which respectively only scramble 20%,
40%, 60% and 80% of the labels. Globally, this makes the tests easier to solve.</p>
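        <p>A hypothetical sketch (in Python; not the actual OAEI test generator, which operates
on full OWL ontologies) of the partial label-scrambling idea behind the new 2xx variants:</p>
        <preformat>
import random
import string

def scramble_labels(labels, fraction, seed=42):
    """Return a copy of `labels` with the given fraction of the labels
    replaced by random strings (deterministic for a fixed seed)."""
    rng = random.Random(seed)
    labels = dict(labels)
    victims = rng.sample(sorted(labels), k=round(fraction * len(labels)))
    for key in victims:
        labels[key] = "".join(rng.choices(string.ascii_lowercase, k=8))
    return labels

# Hypothetical reference labels, scrambled at the percentages used in 2008:
reference = {"Book": "book", "Article": "article",
             "InProceedings": "inproceedings",
             "TechReport": "technical report", "PhDThesis": "PhD thesis"}
for fraction in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(fraction, scramble_labels(reference, fraction))
        </preformat>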
        <p>The kind of expected alignments is still limited: they only match named classes and
properties, they mostly use the "=" relation with confidence of 1. Full description of
these tests can be found on the OAEI web site.
</p>
      </sec>
      <sec id="sec-3-3">
        <title>Results</title>
        <p>All 13 systems participated in the benchmark track of this year’s campaign. Table 3
provides the consolidated results, by groups of tests. We display the results of
participants as well as those given by some simple edit distance algorithm on labels (edna).
The computed values are real precision and recall and not an average of precision and
recall. The full results are on the OAEI web site.</p>
        <p>Results in Table 3 already show that the three systems which were leading
last year (ASMOV, Lily and RiMOM) are still relatively ahead, with three close followers
(AROMA, DSSim, and Anchor-Flood, replacing last year’s Falcon, Prior+ and OLA2).
No system had strictly lower performance than edna. Each algorithm has its best score
with the 1xx test series. There is no particular order between the two other series.</p>
        <p>This year again, the apparently best algorithms provided their results with
confidence measures. It is thus possible to draw precision/recall graphs in order to compare
them. We provide in Figure 1 the precision and recall graphs of this year. They are only
relevant for the results of participants who provided confidence measures different from
1 or 0 (see Table 2). This graph has been drawn with only technical adaptation of the
technique used in TREC. Moreover, due to lack of time, these graphs have been
computed by averaging the graphs of each of the tests (instead of computing pure precision and recall).
They do not feature the curves of previous years since the test sets have been changed.</p>
        <p>These results and those displayed in Figure 2 single out the same group of systems,
ASMOV, Lily, and RiMOM which seem to perform these tests at the highest level of
quality. So this confirms the leadership that we observed on raw results.</p>
        <p>Like in the two previous years, there is a gap between these systems and their
followers. The gap between these systems and the next ones (AROMA, DSSim, and
Anchor-Flood) has reappeared. It was filled last year by Falcon, OLA2, and Prior+, which did not
participate this year.</p>
        <p>We have also compared the results of this year’s systems with the results of the
previous years on the basis of 2004 tests, see Table 4. The two best systems on this basis
are the same: ASMOV and Lily. Their results are very comparable but never identical
to the results provided in the previous years by RiMOM (2006) and Falcon (2005).
</p>
        <p>[Table 3: means of the results obtained by participants on the benchmark test case,
by groups of tests.]</p>
        <p>[Figure 1: precision/recall graphs for the benchmark track; legend: refalign, edna,
aflood, AROMA, ASMOV, CIDER, DSSim, GeRoMe, Lily, MapPSO, RiMOM, SAMBO,
SAMBOdtf, SPIDER, TaxoMap. Table 4: comparison with previous years on the basis of
the 2004 tests.]</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Anatomy</title>
      <p>The focus of the anatomy track is to confront existing matching technology with real
world ontologies. Currently, we find such real world cases primarily in the biomedical
domain, where a significant number of ontologies have been built covering different
aspects of medical research (a large collection can be found at http://www.obofoundry.org/).
Manually generating alignments between these ontologies
requires an enormous effort by highly specialized domain experts. Supporting these
experts by automatically providing correspondence proposals is challenging, due to the
complexity and the specialized vocabulary of the domain.</p>
      <p>The ontologies of the anatomy track are the NCI Thesaurus describing the human
anatomy, published by the National Cancer Institute (NCI,
http://www.cancer.gov/cancerinfo/terminologyresources/), and the Adult Mouse
Anatomical Dictionary (http://www.informatics.jax.org/searches/AMA_form.shtml),
which has been developed as part of the Mouse Gene
Expression Database project. Both resources are part of the Open Biomedical Ontologies
(OBO). A more detailed description of the characteristics of the data set has already
been given in the context of OAEI 2007 [8].</p>
      <p>
        Due to the harmonization of the ontologies applied in the process of generating
a reference alignment, a high number of rather trivial correspondences can be found
by simple string comparison techniques. At the same time, we have a good share of
non-trivial correspondences that require a careful analysis and sometimes also medical
background knowledge. The construction of the reference alignment has been described
in [
        <xref ref-type="bibr" rid="ref3">3</xref>
          ]. To better understand the occurrence of non-trivial correspondences in alignment
results, we implemented a straightforward matching tool that compares normalized
concept labels. This trivial matcher generates, for all pairs of concepts ⟨C, D⟩, a
correspondence if and only if the normalized label of C is identical to the normalized label of
D. In general we expect an alignment generated by this approach to be highly precise
while recall will be relatively low. With respect to our matching task we measured
approximately 98% precision and 61% recall. Notice that the value for recall is relatively
high, which is partially caused by the harmonization process mentioned above. In 2007
we assumed that most matching systems would easily find the trivial correspondences.
To our surprise, this assumption has not been verified. Therefore, we applied again the
additional measure referred to as recall+. recall+ measures how many non-trivial
correct correspondences can be found in an alignment M. Given reference alignment R
and alignment S generated by the naive string equality matching, recall+ is defined as
recall+ = |(R ∩ M) \ S| / |R \ S|.
        </p>
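        <p>The following minimal sketch (in Python, with illustrative normalization rules that
are an assumption, not the exact procedure used by the organizers) shows the trivial
matcher and the recall+ measure:</p>
        <preformat>
def normalize(label):
    # Assumed normalization: lowercase and collapse separators.
    return " ".join(label.lower().replace("_", " ").replace("-", " ").split())

def trivial_matcher(onto1, onto2):
    """Generate a correspondence (C, D) iff the normalized labels of C and D
    are identical; ontologies are {concept_id: label} dicts."""
    index = {}
    for concept, label in onto2.items():
        index.setdefault(normalize(label), []).append(concept)
    return {(c1, c2)
            for c1, label in onto1.items()
            for c2 in index.get(normalize(label), [])}

def recall_plus(M, R, S):
    r"""recall+ = |(R ∩ M) \ S| / |R \ S|, where R is the reference alignment
    and S the alignment produced by naive string equality matching."""
    non_trivial = R.difference(S)
    if not non_trivial:
        return 0.0
    return len(non_trivial.intersection(M)) / len(non_trivial)
        </preformat>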
        <p>We divided the task of automatically generating an alignment into four subtasks.
Task #1 is obligatory for participants of the anatomy track, while tasks #2, #3 and #4 are
optional. Compared to 2007, we also introduced #4 as a challenging fourth subtask.
For task #1 the matching system has to be applied with standard settings to obtain a
result that is as good as possible with respect to the expected F-measure. In particular,
we are interested in how far matching systems improved their results compared to last
year’s evaluation. For task #2 an alignment with increased precision has to be found.
Contrary to this, in task #3 an alignment with increased recall has to be generated. We
believe that systems configurable with respect to these requirements will be much more
useful in concrete scenarios compared to static systems. While we expect most systems
to solve the first three tasks, we expect only few systems to solve task #4. For this task
a part of the reference alignment is available as additional input. In task #4 we tried to
simulate the following scenario. Suppose that a group of domain experts already
created an incomplete reference alignment by manually validating a set of automatically
generated correspondences. As a result, a partial reference alignment, in the following
referred to as Rp, is available. Given both ontologies as well as Rp, a matching system
should be able to exploit the additional information encoded in Rp. We constructed Rp
as the union of the correct trivial correspondences and a small set of 54 non-trivial
correspondences. Thus Rp consists of 988 correspondences, while the complete reference
alignment R contains 1523 correspondences.</p>
      <sec id="sec-4-1">
        <title>Results</title>
        <p>In total, nine systems participated in the anatomy task (in 2007 there were 11
participants). These systems can be divided into a group of systems using biomedical
background knowledge and a group of systems that do not exploit domain-specific
background knowledge. SAMBO and ASMOV belong to the first group, while the other
systems belong to the second group. Both SAMBO and ASMOV make use of UMLS,
but differ in the way they exploit this additional knowledge. Table 5 gives an overview
of participating systems. In 2007 we observed that systems of the first group have a
significant advantage in finding non-trivial correspondences; in particular, the best three
systems (AOAS, SAMBO, and ASMOV) made use of background knowledge. We will
later see whether this observation could be verified with respect to the 2008 submissions.</p>
        <p>Compliance measures for task #1 Table 5 lists the results of the participants in
descending order with respect to the achieved F-measure. In the first row we find the
SAMBO system followed by its extension SAMBOdtf. SAMBO has achieved slightly
better results for both precision and recall in 2008 compared to 2007. SAMBO now
nearly reaches the F-measure of 0.868 which AOAS achieved in 2007. This is a notable
result, since SAMBO is originally designed to generate alignment suggestions that are
afterwards presented to a human evaluator in an interactive fashion. While SAMBO
and SAMBOdtf make extensive use of biomedical background knowledge, the RiMOM
matching system is mainly based on computing label edit distances combined with
similarity propagation strategies. Due to a major improvement of the RiMOM results,
RiMOM is now one of the top matching systems for the anatomy track even though it
does not make use of any specific background knowledge. Notice also that RiMOM
solves the matching task in a very efficient way. Nearly all matching systems that
participated in 2007 improved their results, while ASMOV and TaxoMap obtained slightly worse
results. Further considerations have to clarify the reasons for this decline.</p>
        <p>Task #2 and #3 As explained above, these subtasks show to what extent matching
systems can be configured towards a trade-off between precision and recall. To our surprise,
only four participants submitted results for tasks #2 and #3, showing that they were able to
adapt their systems to different scenarios of application. These systems were RiMOM,
Lily, ASMOV, and DSSim. A more detailed discussion of their results with respect to
tasks #2 and #3 can be found on the OAEI anatomy track webpage
(http://webrum.uni-mannheim.de/math/lski/anatomy08/).</p>
        <p>[Table 5: anatomy track participants with use of background knowledge, runtime, and
compliance results, in descending order of task #1 F-measure: SAMBO, SAMBOdtf,
RiMOM, aflood, label equality baseline, Lily, ASMOV, AROMA, DSSim, TaxoMap.]</p>
        <p>Task #4 Four systems participated in task #4. These systems were SAMBO,
SAMBOdtf, RiMOM, and ASMOV. In the following we refer to an alignment
generated for task #1 resp. #4 as M1 resp. M4. Notice first of all that a direct comparison
between M1 and M4 is not appropriate to measure the improvement that results from
exploiting Rp. We thus have to compare M1 \ Rp resp. M4 \ Rp with the unknown subset
of the reference alignment Ru = R \ Rp. The differences between M1 (partial reference
alignment not available) and M4 (partial reference alignment given) are presented in
Table 6. All participants slightly increased the overall quality of the generated alignments
with respect to the unknown part of the reference alignment. SAMBOdtf and ASMOV
exploited the partial reference alignment in the most effective way. The measured
improvement seems to be only minor at first sight, but notice that all of the correspondences
in Ru are non-trivial due to our choice of the partial reference alignment. The
improvement is primarily based on generating an alignment with increased precision. ASMOV,
for example, increases its precision from 0.339 to 0.402. Only SAMBOdtf also
profits from the partial reference alignment by a slightly increased recall. Obviously, the
partial reference alignment is mainly used in the context of a strategy which filters out
incorrect correspondences.</p>
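        <p>A small sketch (in Python, under the set representation used above) of this
evaluation protocol for task #4:</p>
        <preformat>
def score_against_unknown(M, R, Rp):
    r"""Precision and recall of M \ Rp against the unknown part Ru = R \ Rp."""
    Ru = R.difference(Rp)
    found = M.difference(Rp)
    tp = len(found.intersection(Ru))
    prec = tp / len(found) if found else 0.0
    rec = tp / len(Ru) if Ru else 0.0
    return prec, rec

# Usage: compare a run without the partial reference (M1) against a run
# with it (M4), i.e. score_against_unknown(M1, R, Rp) vs.
# score_against_unknown(M4, R, Rp).
        </preformat>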
        <p>
          Runtime Even though the submitted alignments have been generated on different
machines, we believe that the runtimes provided by participants are nevertheless useful
and provide a basis for an approximate comparison. For the two fastest systems, namely
aflood and AROMA, runtimes have additionally been measured by the track organizers on the same
machine (Pentium D 3.4GHz, 2GB RAM). Compared to last year’s
competition, we observe that systems with a high runtime managed to decrease the runtime
of their system significantly, e.g., Lily and ASMOV. Amongst all systems, AROMA and
aflood, both participating for the first time, performed best with respect to runtime
efficiency. In particular, the aflood system achieves results of high quality in a very efficient
way.
        </p>
        <p>
          In last year’s evaluation, we concluded that the use of domain-related background
knowledge is a crucial point in matching biomedical ontologies. This observation is supported
by the claims made by other researchers [
          <xref ref-type="bibr" rid="ref1">1, 15</xref>
          ]. The current results partially support
this claim, in particular the good results of the SAMBO system. Nevertheless, the
results of RiMOM and Lily indicate that matching systems are able to detect non-trivial
correspondences even though they do not rely on background knowledge. To support
this claim, we computed the union of the alignments generated by RiMOM and Lily.
As a result, we measured that 61% of all non-trivial correspondences are included in
the resulting alignment. Thus, there seems to be a significant potential in exploiting
knowledge encoded in the ontologies. A combination of both approaches might result
in a hybrid matching strategy that uses both background knowledge and the internal
knowledge to their full extent.
        </p>
      </sec>
    </sec>
    <sec id="sec-fao">
      <title>FAO</title>
      <p>
The Food and Agriculture Organization of the United Nations (FAO) collects large
amounts of data about all areas related to food production and consumption, including
statistical data, e.g., time series, and textual documents, e.g., scientific papers, white
papers, project reports. For the effective storage and retrieval of these data sets,
controlled vocabularies of various types (in particular, thesauri and metadata hierarchies)
have extensively been used. Currently, these data are being converted into ontologies for
the purpose of enabling connections between data sets otherwise isolated from one
another. The FAO test case aims at exploring the possibilities of establishing alignments
between some of the ontologies traditionally available. We chose a representative subset
of them, which we describe below.
The FAO task involves the three following ontologies:
– AGROVOC (http://www.fao.org/aims/ag_intro.htm) is a thesaurus about all matters of interest for FAO. It has been
translated into an OWL ontology as a hierarchy of classes, where each class corresponds
to an entry in the thesaurus. For technical reasons, each class is associated with an
instance with the same name. Given the size and the coverage of AGROVOC, we
selected only the branches of it that have some overlap with the other considered
ontologies. We then selected the fragments of AGROVOC about “organisms,”
“vehicles” (including vessels), and “fishing gears”.
– ASFA (http://www.fao.org/fishery/asfa/) is a thesaurus specifically dedicated to aquatic sciences and fisheries. In its
OWL translation, descriptors and non-descriptors are modeled as classes, so the
ontology does not contain any instance. The tree structure of ASFA is relatively flat,
with most concepts not having subclasses, and a maximum depth of 4 levels.
Concepts have associated annotations, each containing the English definition
of the term.
– Two specific fisheries ontologies in OWL (http://www.fao.org/aims/neon.jsp), that model coding systems for
commodities and species, used as metadata for statistical time series. These ontologies
have a fairly simple class structure, e.g., the species ontology has one top class
and four subclasses, but a large number of instances. They contain instances in up
to 3 languages (English, French and Spanish).</p>
        <p>Based on these ontologies, participants were asked to establish alignments between:
1. AGROVOC and ASFA (from now on called agrasfa),
2. AGROVOC and fisheries ontology about biological species (called agrobio),
3. the two ontologies about biological species and commodities (called fishbio).
Given the structure of the ontologies described above, the expectation about the
resulting alignments was that the alignment between AGROVOC and ASFA (agrasfa)
would be at the class level, since both model entries of the thesaurus as classes.
Analogously, both the alignment between AGROVOC and biological species (agrobio), and
the alignment between the two fisheries ontologies (fishbio) is expected to be at the
instance level. However, no strict instructions were given to participants about the exact
type of alignment expected, as one of the goals of the experiment was to find how
automatic systems can deal with a real-life situation, when the ontologies given are designed
according to different models and have little or no documentation.</p>
        <p>The equivalence correspondences requested for the agrasfa and agrobio subtracks
are plausible, given the similar nature of the two resources (thesauri used for human
indexing, with some overlap in the domain covered). In the case of the fishbio subtrack
this is not true, as the two ontologies involved are about two domains that are disjoint,
although related, i.e., commodities and fish species. The relation between the two
domains is that one specific species (or more than one) is the primary source of the goods
sold, i.e., the commodity. Their relation then is not an equivalence relation but can rather
be seen, in OWL terminology, as an object property with domain and range sitting in
different ontologies. The intent of the fishbio subtrack is then to explore the
possibility of using the machinery available for inferring equivalence correspondences in
non-conventional cases.</p>
        <p>All participants but one, Aroma, returned equivalence correspondences only. The
non-equivalence correspondences of Aroma were ignored.</p>
        <p>A reference alignment was obtained by randomly selecting a specific number of
correspondences from each system and then pooling them together. This provided a sample
alignment A′.</p>
        <p>This sample alignment was evaluated by FAO experts for correctness. This provided
a partial reference alignment R′. We had two assessors: one specialized in thesauri
and daily working with AGROVOC (assessing the alignments of the agrasfa subtrack) and
one specialized in fisheries data (assessing subtracks agrobio and fishbio). Given the
differences between the ontologies, some transformations had to be made in order to
present data to the assessors in a user-friendly manner. For example, in the case of
AGROVOC, evaluators were given the English labels together with all available “used
for” terms (according to the thesaurus terminology familiar to the assessor).</p>
        <p>[Table: per subtrack (agrasfa, agrobio, fishbio) and in total: correspondences
retrieved (A), evaluated (A′), correct (R′), and the ratios A′/A and R′/A′.]</p>
        <p>Precision was computed relative to the evaluated sample:
P′(A, R′) = P(A ∩ A′, R′) = |A ∩ R′| / |A ∩ A′|.
The same was considered for recall, which was computed with respect to the total
number of correct correspondences per subtrack, as assessed by the human assessors. Hence,</p>
        <p>R′(A, R′) = R(A ∩ A′, R′) = |A ∩ R′| / |R′|.
This recall is expected to be higher than actual recall because it is based only on
correspondences that at least one system returned, leaving aside those that no system was able
to return.</p>
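        <p>A minimal sketch (in Python) of these two measures, with alignments again
represented as sets of correspondences:</p>
        <preformat>
def relative_precision(A, A_prime, R_prime):
    """P'(A, R') with A a system alignment, A_prime the evaluated sample
    and R_prime the correspondences judged correct by the assessors."""
    evaluated = A.intersection(A_prime)
    if not evaluated:
        return 0.0
    return len(A.intersection(R_prime)) / len(evaluated)

def relative_recall(A, R_prime):
    """R'(A, R'): share of the pooled correct correspondences found by A."""
    return len(A.intersection(R_prime)) / len(R_prime)
        </preformat>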
        <p>We call these two measures relative precision and recall because they are relative to
the sample that has been extracted.</p>
        <p>Table 8 summarizes the precision and (relative) recall values of all systems, by
subtrack. The third column reports the total number of correspondences returned by each
system per subtrack. All non-equivalence correspondences were discarded, but this only
happened for one system (Aroma). The fourth column reports the number of
correspondences from each system that were evaluated, while the fifth column reports the number
of correct correspondences as judged by the assessors. Finally, the sixth and seventh columns
report the values of relative precision and recall computed as described above.</p>
        <p>[Table 8: relative precision and recall per system (Aroma, ASMOV, DSSim, Lily,
MapPSO, RiMOM, SAMBO, SAMBOdtf) and subtrack (agrasfa, agrobio, fishbio).]</p>
        <p>One system (MapPSO) returned alignments of properties, which were discarded,
and therefore no evaluation is provided in the table. The results of ASMOV were also
not evaluated because they were too few to be considered. Finally, the evaluation of Aroma is
incomplete due to the non-equivalence correspondences returned, which were discarded
before pooling the results together to create the reference alignment.</p>
        <p>The sampling method that has been used is certainly not perfect. In particular, it did
not allow us to evaluate two systems which returned few results (ASMOV and MapPSO).
However, the results returned by these systems were not likely to provide good recall.</p>
        <p>Moreover, the very concise instructions and the particular character of the test sets
clearly puzzled participants and their systems. As a consequence, the results may not
be as good as if the systems were applied to polished tests with easily comparable data
sets. This provides an honest insight into what these systems would do when confronted
with these ontologies on the web. In that respect, the results are not bad.</p>
        <p>From the DSSim and RiMOM results, it seems that fishbio is the most difficult task
in terms of precision and agrasfa the most difficult in terms of recall (for most of the
systems). The fact that only two systems returned usable results for agrobio and
fishbio makes comparison of systems very difficult at this stage. However, it seems that
RiMOM is the one that provided the best results. RiMOM is especially interesting in
this real-life case, as it performed well both when an alignment between classes and
when an alignment between instances is appropriate. Given the fact that in real-life situations it
is rather common to have ontologies with a relatively simple class structure and a very
large population of instances, this is encouraging.</p>
    </sec>
    <sec id="sec-5">
      <title>Directory</title>
      <p>The directory test case aims at providing a challenging task for ontology matchers in
the domain of large directories.</p>
      <p>The data set exploited in the directory matching task was constructed from the Google,
Yahoo and Looksmart web directories following the methodology described in [9].
The data set is presented as taxonomies where the nodes of the web directories are
modeled as classes and the classification relation connecting the nodes is modeled as an
rdfs:subClassOf relation.</p>
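      <p>As an illustration, the following hypothetical sketch (in Python, using the rdflib
library; the namespace and category names are invented, not the actual test data) turns a
root-to-node directory path into such a class hierarchy:</p>
      <preformat>
from rdflib import Graph, Namespace, OWL, RDF, RDFS

EX = Namespace("http://example.org/directory#")

def path_to_taxonomy(graph, path):
    """Model each directory node as a class and each parent/child edge
    as an rdfs:subClassOf statement."""
    for parent, child in zip(path, path[1:]):
        graph.add((EX[parent], RDF.type, OWL.Class))
        graph.add((EX[child], RDF.type, OWL.Class))
        graph.add((EX[child], RDFS.subClassOf, EX[parent]))

g = Graph()
path_to_taxonomy(g, ["Top", "Recreation", "Outdoors", "Fishing"])
print(g.serialize(format="turtle"))
      </preformat>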
      <p>The key idea of the data set construction methodology is to significantly reduce the
search space for human annotators. Instead of considering the full matching task, which
is very large (the Google and Yahoo directories have up to 3·10^5 nodes each: this means
that the human annotators would need to consider up to (3·10^5)^2 = 9·10^10 correspondences),
it uses semi-automatic pruning techniques in order to significantly reduce the search
space. For example, for the data set described in [9], human annotators considered only
2265 correspondences instead of the full matching problem.</p>
      <p>The specific characteristics of the data set are:
– More than 4,500 node matching tasks, where each node matching task is composed
of the paths to the root of the nodes in the web directories.
– Reference correspondences for all the matching tasks.
– Simple relationships; in particular, web directories contain only one type of
relationship, the so-called classification relation.
– Vague terminology and modeling principles; thus, the matching tasks incorporate
the typical real world modeling and terminological errors.</p>
      <p>In OAEI-2008, 7 out of 13 matching systems participated in the web directories test
case, while in OAEI-2007, 9 out of 18, in OAEI-2006, 7 out of 10, and in OAEI-2005,
7 out of 7 did.</p>
      <p>
        Precision, recall and F-measure results of the systems are shown in Figure 3. These
indicators have been computed following the TaxMe2 [9] methodology, with the help
of Alignment API [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], version 3.4.
      </p>
      <p>We can observe from Table 9 that all the systems that participated in both the 2007
and 2008 directory tracks (ASMOV, DSSim, Lily and RiMOM) have increased their
precision values. Considering recall, we can see that in general the systems that
participated in both the 2007 and 2008 directory tracks have decreased their values; the only
system that increased its recall values is DSSim. In fact, DSSim is the system with the
highest F-measure value in 2008.</p>
      <p>Table 9 shows that in total 21 matching systems have participated during the 4
years (2005-2008) of the OAEI campaign in the directory track. No single system
has participated in all campaigns involving the web directory dataset (2005-2008). A
total of 14 systems have participated only one time in the evaluation, 5 systems have
participated 2 times, and only 2 systems have participated 3 times. The systems that
have participated in 3 evaluations are Falcon (2005, 2006 and 2007) and RiMOM (2006,
2007, 2008), the former with a constant increase in the quality of the results, the latter
with a constant increase in precision, but in the last evaluation (2008) recall dropped
significantly from 71% in 2007 to 17% in 2008.</p>
      <sec id="sec-5-1">
        <title>System</title>
        <p>Year !
ASMOV
automs
CIDER
CMS</p>
        <p>COMA
ctxMatch2</p>
        <p>DSSim
Dublin20
Falcon
FOAM
hmatch</p>
        <p>Lily
MapPSO</p>
        <p>OCM
OLA</p>
        <p>OMAP
OntoDNA</p>
        <p>Prior
RiMOM
TaxoMap
X-SOM
Average
#
2005</p>
        <p>As can be seen in Figure 4 and Table 9, there is an increase in the average precision
for the directory track of 2008, along with a decrease in the average recall compared to
2007. Notice that in 2005 the data set allowed only the estimation of recall, therefore
Figure 4 and Table 9 do not contain values of precision and F-measure for 2005.</p>
      <p>A comparison of the results in 2006, 2007 and 2008 for the top-3 systems of each
year, based on the highest values of the F-measure indicator, is shown in Figure 5. The
key observation here is that unfortunately the top-3 systems of 2007 did not participate
in the directory task this year; therefore, the top-3 for 2008 is a new set of
systems (Lily, CIDER and DSSim). Of these 3 systems, CIDER is a newcomer, but
Lily and DSSim had also participated in the directory track of 2007, when they did not
manage to enter the top-3 list.</p>
      <p>[Table: number of correspondences returned per system and domain in the
multilingual directory track.]</p>
      <p>[…] also seen in the movie domain. In contrast, MapPSO has a very different tendency.
Although the system found 556 correspondences in total, only one correspondence was also
found by the other systems.</p>
        <p>We also created a component bar chart (Figure 10) for clarifying the sharing of
retrieved correspondences. In the automobile and movie domains, 80% of the
correspondences are found by only one system, and most of the other 20% are found by both
Lily and RiMOM. From this graph, we can see that Lily has the same bias as RiMOM,
but the system still found many correspondences that the other systems did not find.
For the remaining domains, outdoor, photo and software, the correspondences found by
only one system reached almost 100%.</p>
      <p>Unfortunately, the results of the other alignment tasks, such as English-Japanese
alignments (ontology 1-3, ontology 1-4, ontology 2-3, and ontology 2-4) and Japanese-Japanese
alignments (ontology 3-4), were only submitted by RiMOM. The numbers of correspondences
returned by RiMOM are shown in Table 12.</p>
    </sec>
    <sec id="sec-6">
      <title>Library</title>
      <p>This test case deals with two large Dutch thesauri. The National Library of the
Netherlands (KB) maintains two large collections of books: the Scientific Collection and the
Deposit collection, containing respectively 1.4 and 1 million books. Each collection is
annotated – indexed – using its own controlled vocabulary. The former is described
using the GTT thesaurus, a huge vocabulary containing 35,194 general concepts, ranging
from “Wolkenkrabbers” (Sky-scrapers) to “Verzorging” (Care). The latter is indexed
against the Brinkman thesaurus, which contains a large set of headings (5,221) for
describing the overall subjects of books. Both thesauri have similar coverage (2,895
concepts actually have exactly the same label) but differ in granularity.</p>
      <p>Each concept has exactly one preferred label, plus synonyms, extra hidden labels or
scope notes. The language of both thesauri is Dutch (a quite substantial part of GTT
concepts, around 60%, also have English labels), which makes this track ideal for
testing alignment in a non-English situation. Concepts are also provided with structural
information, in the form of broader and related links. However, GTT (resp. Brinkman)
contains only 15,746 (resp. 4,572) hierarchical broader links and 6,980 (resp. 1,855)
associative related links. The thesauri’s structural information is thus very poor.</p>
      <p>For the purpose of the OAEI campaign, the two thesauri were made available in
SKOS format. OWL versions were also provided, according to the – lossy – conversion
rules detailed on the web site (http://oaei.ontologymatching.org/2008/skos2owl.html).</p>
      <p>In addition, we have provided participants with book descriptions. At KB, almost
250,000 books belong to both the KB Scientific and Deposit collections, and are
therefore already indexed against both GTT and Brinkman. Last year, we used these
books as a reference for evaluation. However, these books can also be a precious hint
for obtaining correspondences. Indeed, one of last year’s participants had exploited
co-occurrence of concepts, though on a collection obtained from another library. This year,
we split the 250,000 books into two sets: two thirds of them were provided to participants for
alignment computation, and one third was kept as a test set to be used as a reference for
evaluation.</p>
      <p>Three systems provided final results: DSSim (2,930 exactMatch correspondences),
Lily (2,797 exactMatch correspondences) and TaxoMap (1,872 exactMatch
correspondences, 274 broadMatch, 1,031 narrowMatch and 40 relatedMatch
correspondences).</p>
      <p>We have followed the scenario-oriented approach used for the 2007 library track,
as explained in [12].</p>
      <sec id="sec-6-1">
        <title>Evaluation in a thesaurus merging scenario</title>
        <p>The first scenario is thesaurus merging, where an alignment is used to build a new, unified thesaurus from the GTT and Brinkman
thesauri. Evaluation in such a context requires assessing the validity of each individual
correspondence, as in “standard” alignment evaluation.</p>
        <p>As last year, there was no reference alignment available. We opted for evaluating
precision using a reference alignment based on a lexical procedure. This makes use
of direct comparison between labels, but also exploits a Dutch morphology database
that makes it possible to recognize variants of a word, e.g., singular and plural. 3,659 reliable
equivalence links were obtained this way. We also measured coverage, which we define
as the number of good correspondences found by an alignment divided by the
total number of good correspondences produced by all participants and those in the
reference – this is similar to the pooling approach that is used in major Information
Retrieval evaluations, like TREC.</p>
        <p>For manual evaluation, the set of all equivalence correspondences (we did not
proceed with manual evaluation of the broader, narrower and related links, as only one
contestant provided such links) was partitioned
into parts unique to each combination of participant alignments, and each part was
sampled. A total of 403 correspondences were assessed by one Dutch native expert.</p>
        <p>From these assessments, precision and pooled recall were calculated with their 95%
confidence intervals, taking into account sampling size. The results are shown in
Table 13, which identifies DSSim as performing better than both other participants.</p>
        <p>DSSim has performed better than last year. This result probably stems from DSSim
now proposing almost only exact lexical matches of SKOS labels, as opposed to last
year.</p>
        <p>For the sake of completeness, we also evaluated the precision of the TaxoMap
correspondences that are not of type exactMatch. We categorized them according to the
strength that TaxoMap gave them (0.5 or 1). 20% (±11%) of the correspondences with
strength 1 are correct. The figure rises to 25.1% (±8.3%) when considering all
non-exactMatch correspondences, which hints at the strength not being very informative.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Evaluation in an annotation translation scenario</title>
        <p>The second usage scenario is based on an annotation translation process supporting the re-indexing of GTT-indexed
books with Brinkman concepts [12].</p>
        <p>This evaluation scenario interprets the correspondences provided by the
different participants as rules to translate existing GTT book annotations into equivalent
Brinkman annotations. Based on the quality of the results for books whose correct
annotations we know, we can assess the quality of the initial correspondences.</p>
        <p>Evaluation settings and measures. The simple concept-to-concept
correspondences sent by participants were transformed into more complex mapping rules that
associate one GTT concept and a set of Brinkman concepts – some GTT concepts are
indeed involved in several mapping statements. Considering exactMatch only, this
gives 2,930 rules for DSSim, 2,797 rules for Lily and 1,851 rules for TaxoMap. In
addition, TaxoMap produces resp. 229, 897 and 39 rules considering broadMatch,
narrowMatch and relatedMatch.</p>
        <p>The set of GTT concepts attached to each book is then used to decide whether these
rules are fired for this book. If the GTT concept of one rule is contained by the GTT
annotation of a book, then the rule is fired. As several rules can be fired for the same book,
the union of the consequents of these rules forms the translated Brinkman annotation of
the book.</p>
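        <p>A minimal sketch (in Python, with invented concept identifiers) of this rule-firing
step:</p>
        <preformat>
def translate_annotation(gtt_annotation, rules):
    """Union of the consequents of all fired rules; a rule fires when its
    GTT concept occurs in the book's GTT annotation."""
    translated = set()
    for concept in gtt_annotation:
        translated.update(rules.get(concept, set()))
    return translated

# Hypothetical rules (one GTT concept to a set of Brinkman concepts):
rules = {"gtt:skyscrapers": {"br:architecture", "br:buildings"},
         "gtt:care": {"br:health"}}
print(translate_annotation({"gtt:skyscrapers", "gtt:care"}, rules))
        </preformat>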
        <p>On a set of books selected for evaluation, the generated concepts for a book are then
compared to the ones that are deemed as correct for this book. At the book level, we
measure how many books have a rule fired on them, and how many of them are actually
matched books, i.e., books for which the generated Brinkman annotation contains at
least one correct concept. These two figures give a precision (Pb) and a recall (Rb) for
this book level.</p>
        <p>At the annotation level, we measure (i) how many translated concepts are correct
over the annotation produced for the books on which rules were fired (Pa), (ii) how
many correct Brinkman annotation concepts are found for all books in the evaluation set
(Ra), and (iii) a combination of these two, namely a Jaccard overlap measure between
the produced annotation (possibly empty) and the correct one (Ja).</p>
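        <p>A sketch (in Python) of the annotation-level measures, where each book is a pair of
the produced and the correct Brinkman annotation sets:</p>
        <preformat>
def jaccard(produced, correct):
    union = produced.union(correct)
    if not union:
        return 1.0  # both annotations empty: trivially identical
    return len(produced.intersection(correct)) / len(union)

def annotation_scores(books):
    """books: list of (produced, correct) pairs; returns (Pa, Ra, mean Ja)."""
    fired = [(p, c) for p, c in books if p]  # books with a non-empty result
    tp = sum(len(p.intersection(c)) for p, c in fired)
    Pa = tp / sum(len(p) for p, _ in fired) if fired else 0.0
    Ra = tp / sum(len(c) for _, c in books)
    Ja = sum(jaccard(p, c) for p, c in books) / len(books)
    return Pa, Ra, Ja
        </preformat>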
        <p>The ultimate measure for alignment quality here is at the annotation level.
Measures at the book level are used as a raw indicator of users’ (dis)satisfaction with the
built system. An Rb of 60% means that the alignment does not produce any useful
candidate concept for 40% of the books. We would like to mention that, in these formulas,
results are counted on a book and annotation basis, and not on a rule basis. This reflects
the importance of different thesaurus concepts: a translation rule for a frequently used
concept is more important than a rule for a rarely used concept. This option suits the
application context better.</p>
        <p>Manual evaluation. Last year, we evaluated the results of the participants in two
ways, one manual – KB indexers evaluating the generated indices – and one automatic –
using books indexed against both GTT and Brinkman. This year, we did not perform
a manual evaluation; last year's findings can be found in [12].</p>
        <p>Automatic evaluation and results. Here, the reference set consists of 81,632
dually-indexed books forming the test set presented in Section 8.1. The existing
Brinkman indices from these books are taken as a reference to which the results of
annotation translation are automatically compared.</p>
        <p>The upper part of Table 14 gives an overview of the evaluation results when we only
use the exactMatch correspondences. DSSim and TaxoMap perform similarly in
precision, and far ahead of Lily. While precision almost reaches last year's best results, recall
is much lower. Less than one third of the books were given at least one correct Brinkman
concept in the DSSim case. At the annotation level, half of the translated concepts are
not validated, and more than 75% of the real Brinkman annotation is not found. We
already pointed out that the correspondences from DSSim are mostly generated by lexical
similarity. This indicates, as last year, that lexically equivalent correspondences alone
do not solve the annotation translation problem.</p>
        <p>[Table 14. Book-level and annotation-level results (Pb, Rb, Pa, Ra, Ja) for DSSim, Lily
and TaxoMap, and for TaxoMap with broadMatch, hierarchical, and all correspondences added;
the values are not legible in this version.]</p>
        <p>Among the three participants, only TaxoMap generated broadMatch and
narrowMatch correspondences. To evaluate their usefulness for annotation
translation, we evaluated their influence when they were added to a common set of rules. As
shown in the four TaxoMap lines in Table 14, the use of broadMatch, narrowMatch
and relatedMatch correspondences slightly increases the chances of having a book
given a correct annotation. However, this unsurprisingly results in a loss of precision.</p>
        <p>A first comment on this track concerns the form of the alignments returned by the
participants, especially with respect to the type and cardinality of alignments. All three
participants proposed alignments using the SKOS links we asked for. However, only
one participant proposed hierarchical broader, narrower and related links.
Experiments show that these links can be useful for the application scenarios at hand. The
broader links are useful to attach concepts which cannot be mapped to an equivalent
corresponding concept, but only to a more general or more specific one. This is likely to happen, since
the two thesauri have different granularity but the same general scope.</p>
        <p>This actually mirrors what happened in last year’s campaign, where only one
participant had given non-exact correspondence links – even though it was relatedMatch
then. Evaluation had shown that even though the general quality was lowered by
considering them, the loss of precision was not too important, which could make these links
interesting for some application variants, e.g. semi-automatic re-indexing.</p>
        <p>Second, as last year, there is no precise handling of one-to-many or many-to-many alignments.
Sometimes a concept from one thesaurus is mapped to several concepts
from the other. This proves to be very useful, especially in the annotation translation
scenario, where the concepts attached to a book should ideally be translated as a whole.</p>
        <p>Finally, one should note the low coverage of the alignments with respect to the thesauri,
especially GTT: in the best case, only 2,930 of its 35K concepts were linked to some
Brinkman concept, which is less than last year (9,500). This track, arguably because of
its Dutch language context, is difficult. We had hoped that the release of a part of the
set of KB's dually indexed books would help tackle this difficulty, as the previous year's
campaign had shown promising results when exploiting real book annotations.
Unfortunately, none of this year's participants used this resource.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Very large crosslingual resources</title>
      <p>The goal of the Very Large Crosslingual Resources task is twofold. First, we are
interested in the alignment of vocabularies in different languages. Many collections
throughout Europe are indexed with vocabularies in languages other than English. These
collections would benefit from an alignment to resources in other languages to broaden the
user group, and possibly enable integrated access to the different collections.</p>
      <p>Second, we intend to present a realistic use case in the sense that the resources
are large, rich in semantics but weak in formal structure, i.e., realistic on the Web. For
collections indexed with an in-house vocabulary, the link to a widely-used and rich
resource can enhance the structure and increase the scope of the in-house thesaurus.
Three resources are used in this task:
GTAA The GTAA is a Dutch thesaurus used by the Netherlands Institute for Sound
and Vision to index their collection of TV programs. It is a faceted thesaurus, of
which we use the following four facets: (1) Subject: the topic of a TV program,
3,800 terms; (2) People: the main people mentioned in a TV program, 97,000
terms; (3) Names: the main “Named Entities” mentioned in a TV program
(corporation names, music bands, etc.), 27,000 terms; (4) Location: the main locations
mentioned in a TV program or the place where it was created, 14,000 terms.
WordNet WordNet is a lexical database of the English language developed at Princeton
University13. Its main building blocks are synsets: groups of words with a
synonymous meaning. In this task, the goal is to match noun-synsets. WordNet contains 7
types of relations between noun-synsets, but the main hierarchy in WordNet is built
on hyponym relations, which are similar to subclass relations. W3C has translated
WordNet version 2.0 into RDF/OWL14.</p>
      <p>The original WordNet model is a rich and well-designed model. However, some
tools may have problems with the fact that the synsets are instances rather
than classes. Therefore, for the purpose of this OAEI task, we have
translated the hyponym hierarchy into a skos:broader hierarchy, making the synsets
skos:Concepts.</p>
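      <p>A minimal sketch of this translation with rdflib; the property name wn20schema:hyponymOf and the file names are our assumptions about the W3C WordNet dump, not a description of the organizers' conversion script:</p>
      <p>from rdflib import Graph, Namespace
from rdflib.namespace import RDF, SKOS

WN = Namespace("http://www.w3.org/2006/03/wn/wn20/schema/")

g = Graph().parse("wordnet-hyponyms.rdf")  # hypothetical local copy of the dump
out = Graph()
for synset, _, hypernym in g.triples((None, WN.hyponymOf, None)):
    out.add((synset, RDF.type, SKOS.Concept))
    out.add((hypernym, RDF.type, SKOS.Concept))
    out.add((synset, SKOS.broader, hypernym))  # a hyponym gets its hypernym as broader
out.serialize(destination="wordnet-skos.ttl", format="turtle")</p>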
      <p>DBpedia DBPedia contains 2.18 million resources or “things”, each tied to an article in
the English language Wikipedia. The “things” are described by titles and abstracts
in English and often also in other languages, including Dutch. DBPedia “things”
have numerous properties, such as categories, properties derived from the Wikipedia
‘infoboxes’, links between pages within and outside Wikipedia, etc. The purpose of
this task is to map the DBPedia “things” to WordNet synsets and GTAA concepts.
13 http://wordnet.princeton.edu/
14 http://www.w3.org/2006/03/wn/wn20/
We evaluate the results of the three alignments (GTAA-WordNet, GTAA-DBPedia,
WordNet-DBPedia) in terms of precision and recall. We present measures for each
GTAA facet separately, instead of a global value, because each facet could lead to very
different performance.</p>
      <p>In the precision and recall calculations, we use a kind of semantic distance; we take
into account the distance between a correspondence that we find in the results and the
ideal correspondence that we would expect for a certain concept. For each equivalence
relation between two concepts in the results, we determine if (i) one is equivalent to the
other, (ii) one is a broader/narrower concept than the other, (iii) one is in none of the
above ways related to the other. In case (i) the correspondence counts as 1, in case (ii)
the correspondence counts as 0.5 and in case (iii) as 0.</p>
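      <p>As an illustration, this weighted counting can be sketched as follows (the judgement labels are hypothetical):</p>
      <p>def semantic_precision(judgements):
    """Precision where an exact match counts 1, a broader/narrower match
    counts 0.5, and an unrelated pair counts 0."""
    credit = {"exact": 1.0, "hierarchical": 0.5, "unrelated": 0.0}
    return sum(credit[j] for j in judgements) / len(judgements)

# e.g. a sample of four judged correspondences:
# semantic_precision(["exact", "hierarchical", "exact", "unrelated"]) == 0.625</p>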
      <p>Precision We take samples of 100 correspondences per GTAA facet for both the
GTAA-DBPedia and the GTAA-WordNet alignments and evaluate their correctness in
terms of exact match, broader, narrower or related match, or no match. The alignment
between WordNet and DBPedia is evaluated by inspection of a random sample of 100
correspondences.</p>
      <p>Recall Due to time constraints, we only determine recall of two of the four GTAA
facets: People and Subjects. These are the most extreme cases in terms of size and
precision values. We create a small reference alignment from a random sample of 100 GTAA
concepts per facet, which we manually map to WordNet and DBPedia. The result of the
GTAA-WordNet and GTAA-DBPedia alignments are compared to the reference
alignments. We do not provide a recall measure for the DBPedia-WordNet alignment.</p>
      <p>Only one participant, DSSim, entered the VLCR task. The evaluation of the
results therefore focuses on the differences between the three alignments, and the four
facets of the GTAA. Table 15 shows the number of concepts in each resource and the
number of correspondences returned for each resource pair. The largest number of
correspondences was found between DBpedia and WordNet (28,974), followed by
GTAA-DBPedia (13,156) and finally GTAA-WordNet (2,405). We hypothesize that the low
number for the latter pair is due to its crosslingual nature. Except for 9 concepts, all
GTAA concepts that were mapped to WordNet were also mapped to DBPedia.</p>
      <p>Precision The precision of the GTAA-DBPedia alignment is higher than that of the
GTAA-WordNet alignment. A possible explanation is the high number of disambiguation
errors for WordNet, whose sense distinctions are much finer grained than those of GTAA or DBPedia.</p>
      <p>A remarkable difference can be seen in the People facet. It is the worst scoring facet
in the GTAA-WordNet alignment (10%), while it is the best facet in GTAA-DBPedia
(94%). Inspection of the results revealed what caused the many mistakes for
WordNet: almost none of the people in GTAA are present in WordNet. Instead of giving up,
DSSim continues to look for a correspondence and maps the GTAA person to a lexically
similar word in WordNet. This problem is apparently not present in DBPedia. Although
we do not yet fully understand why not, an important factor is that more Dutch people
are represented in DBPedia.</p>
      <p>[Table 15. Number of concepts per vocabulary (WordNet, DBPedia, and GTAA with its
Subject, Person, Name and Location facets) and number of correspondences returned per resource
pair; the values are not legible in this version.]</p>
      <p>Fig. 11. Estimated precision of the alignment between GTAA and DBpedia (left) and WordNet
(right).</p>
      <p>Apart from the People facet, the differences between the facets are consistent over
the GTAA-DBPedia and GTAA-WordNet alignments. Subjects and Locations score
high, Names somewhat less.</p>
      <p>The alignment between DBPedia and WordNet had a precision of 45%. DBPedia
contains type links (wordnet-type and rdf:type) to WordNet synsets. There was no
overlap between the alignment submitted by DSSim and these existing links.</p>
      <p>Recall We created reference alignments by matching samples of 100 concepts from
the People and Subjects facets to both DBPedia and WordNet. However, none of the
People in our sample of 100 GTAA People could be mapped to WordNet. Therefore,
recall for this particular alignment could not be determined.
Fig. 12. Estimated coverage (left) and recall (right) for the alignments between the Subject facet
of GTAA and DBpedia and WordNet, and for the alignment between the People facet of GTAA
and DBpedia.</p>
      <p>Figure 12 shows how many of the GTAA Subject and People concepts in our reference
alignment were also found by DSSim; we call this coverage. The right part of the figure depicts how
many GTAA concepts in our reference alignment were mapped by DSSim to the exact
same DBPedia/WordNet concept, which is the conventional definition of recall. All
three alignments had a similar recall score of around 20%.</p>
      <sec id="sec-7-1">
        <title>Summary of the results</title>
        <p>Tables 16 and 17 summarize the results. [Tables 16 and 17. Precision and recall of the
GTAA-DBPedia and GTAA-WordNet alignments, per facet; the values are not legible in this
version.]</p>
      </sec>
      <sec id="sec-7-2">
        <title>Other types of correspondence relations</title>
        <p>The VLCR task once more confirmed what was already known: more correspondence types are necessary than exact matches alone.
While inspecting alignments, we found many cases where a link between two concepts
seems useful for a number of applications, without being an equivalence. For example:</p>
          <p>Subject:pausbezoeken15 and List_of_pastoral_visits_of_Pope_John_Paul_II_outside_Italy.
Location:Venezuela and synset-Venezuelan-noun-1
Subject:Verdedigingswerken16 and fortification</p>
          <p>Using context When looking at the types of mistakes that were made, it became
clear that a number of them could have been avoided by using the specific structure of
the resources being matched. The fact that the GTAA is organized in facets, for example,
can be used to disambiguate terms that appear both as a person and as a location. This
information is represented by the skos:inScheme property. Examples of incorrect
correspondences that might have been avoided if facet information had been used are:
Person:GoghVincentvan -&gt; synset-vacationing-noun-1
Location:Harlem -&gt; synset-hammer-noun-8
Location:Melbourne -&gt; synset-Melbourne-noun-117</p>
          <p>Another example of resource-specific structure that could help matching is the
redirects between pages in Wikipedia or between “things” in DBPedia. DBPedia
contains things for which no other information is available than a ‘redirect’ property
pointing to another thing. The Wikipedia page for “Gordon Summer”, for example, is
immediately redirected to the page for “Sting, the musician”. The titles of these referring pages
could well serve as alternative labels, and thus aid the correspondence between the GTAA
concept person:SummerGordon and the DBPedia thing Sting(musician).</p>
          <p>Of course, there is a trade-off between the amount of resource-specific features that
are taken into account and the general applicability of the matcher. However, some of
the features discussed above, such as facet information, are found in a wide range of
thesauri and are therefore serious candidates for inclusion in a tool.</p>
          <p>Reflection on the evaluation Deciding which synset or DBpedia thing is the most
suitable match for a GTAA concept is a non-trivial task, even for a human evaluator.
15 Pope visits, in English.
16 Defenses, in English.
17 This synset indeed refers to "a resort town in east central Florida".</p>
          <p>Often, multiple correspondences are reasonable. Therefore, the recall figures that are
based on a hand-made reference alignment give a possibly too negative impression of
the quality of the alignment. The evaluation task was further complicated because of the
‘related’ matches. There is a lack of clear definitions of when two concepts are related.</p>
          <p>Another factor that has to be considered when interpreting the precision and
recall figures is the number of Dutch-specific concepts in the GTAA. For example, the
concept Name:Diogenes denotes a Dutch TV program rather than the ancient Greek philosopher.
Although the fact that Diogenes is in the Name facet and not in the People facet
provides a clue to its intended meaning, it could be argued that this type of Dutch-specific
concept poses an unfair challenge to matchers.</p>
          <p>During the evaluation process, we found cases in which DSSim mapped to a
DBPedia disambiguation page instead of an actual article. We consider this to be incorrect,
since it leaves the disambiguation task to the user.
</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>Conference</title>
      <p>The conference track involves matching several ontologies from the conference
organization domain. Participant results have been evaluated along different modalities, and a
consensus workshop, aimed at studying how consensus is reached when establishing
reference alignments, was organised.
The collection consists of fifteen ontologies in the domain of organizing conferences.
Ontologies have been developed within the OntoFarm project18. In contrast to last year’s
conference track, there is one new ontology and several new methods of evaluation.</p>
      <p>The main features of this data set are:
– Generally understandable domain. Most ontology engineers are familiar with
organizing conferences. Therefore, they can create their own ontologies as well as
evaluate the alignments among their concepts with enough erudition.
– Independence of ontologies. Ontologies were developed independently and based
on different resources, they thus capture the issues in organizing conferences from
different points of view and with different terminologies.
– Relative richness in axioms. Most ontologies were equipped with description logic
axioms of various kinds, which opens a way to use semantic matchers.</p>
      <p>Ontologies differ in their number of classes and properties, in their expressivity, and
in their underlying resources. Ten ontologies are based on tools supporting the task of
organizing conferences, two are based on the experience of people personally involved
in conference organization, and three are based on the web pages of concrete conferences.</p>
      <p>Participants had to provide either complete alignments or interesting
correspondences (nuggets), for all or some pairs of ontologies. Participants could also take part in
two different tasks. First, participants could find correspondences without any specific
18 http://nb.vse.cz/~svatek/ontofarm.html
application context given (generic correspondences). Second, participants could find
correspondences with regard to an application scenario: the transformation application.
This means that the final correspondences are to be used for transforming conference data
from one conference-organizing software tool to another.</p>
      <p>This year, results of participants were evaluated by five different methods:
evaluation based on manual labeling, reference alignments, data mining method, logical
reasoning, and on consensus of experts.</p>
      <sec id="sec-8-1">
        <title>Evaluation and results</title>
        <p>We had three participants. All of them delivered generic correspondences. Aside from
results from evaluation methods (sections below) we deliver some simple observations
about participants:
– DSSim and Lily delivered in total 105 alignments. All ontologies were matched to
each other. ASMOV delivered 75 alignments. For our evaluation we do not consider
alignments in which ontologies were matched to themselves.
– Two participants delivered correspondences with certainty factors between 0 and
1 (ASMOV and Lily); one (DSSim) delivered correspondences with confidence
measures 0 or 1, where 0 is used to describe a correspondence as negative.
– DSSim and Lily delivered only equivalence relations (i.e., no subsumption relations), while
ASMOV also provided subsumption relations19.
– All participants delivered class-to-class correspondences and property-to-property
correspondences.</p>
        <p>Evaluation based on manual labeling This kind of evaluation is based on
sampling and manually labeling random samples of correspondences, because the number
of all distinct correspondences is quite high. In particular, we followed the stratified
random sampling method described in [20]. The correspondences of each participant were
divided into three subpopulations (strata) according to their confidence measures20. From each
stratum we randomly chose 75 correspondences, in order to have 225 correspondences
for manual labeling per system, except for the single stratum of the DSSim system, which
was sampled with 150 correspondences.</p>
        <p>In Table 18, there are data for each stratum and system, where Nh is the size of
the stratum, nh is the number of sampled correspondences from the stratum, TP is the
number of correct correspondences in the sample from the stratum, and Ph is an
approximation of precision for the correspondences in the stratum. Furthermore, based on
the assumption that correctness follows a binomial distribution, we computed margins of
error (with a confidence of 95%) for the approximated precision of each system, based on
equations from [20]. In Table 19, there are measures for the entire populations. We
computed the approximated precision P* for the entire population as the weighted average
of the approximated precisions of the strata. Finally, we also computed a so-called ‘relative’
recall (rrecall), the ratio of the number of all correct correspondences found by one system
to the number of all correct correspondences found by any of the systems. As this relative
recall was computed over stratified random samples, it is rather a sample relative recall.
19 No current evaluation method took subsumption correspondences into account;
considering these correspondences in the evaluation methods is our plan for the next year of the
conference track.</p>
        <p>20 DSSim provided merely ‘certain’ correspondences, so there is just one stratum for this system.</p>
        <p>Table 18. Results per stratum (Nh = stratum size, nh = sample size, TP = correct
correspondences in the sample, Ph = approximated precision with its margin of error):
stratum (0,0.3]:   ASMOV: Nh=779,  nh=75,  TP=16, Ph=21% ±12%;  Lily: Nh=426, nh=75, TP=33, Ph=44% ±12%
stratum (0.3,0.6]: ASMOV: Nh=349,  nh=75,  TP=38, Ph=51% ±12%;  Lily: Nh=911, nh=75, TP=27, Ph=36% ±12%
stratum (0.6,1.0]: ASMOV: Nh=135,  nh=75,  TP=51, Ph=68% ±12%;  Lily: Nh=407, nh=75, TP=39, Ph=52% ±12%
single stratum:    DSSim: Nh=1950, nh=150, TP=46, Ph=30% ±8%</p>
        <p>Table 19. Measures over the entire populations:
ASMOV: P* = 34% ±10%, rrecall = 18%;  DSSim: P* = 30% ±8%, rrecall = 14%;  Lily: P* = 42% ±10%, rrecall = 17%</p>
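        <p>As an illustration, the stratified estimate can be reproduced as follows. This is a simplified sketch using the ASMOV strata of Table 18 and a plain binomial margin of error; the margins reported above follow the equations from [20] and may differ:</p>
        <p>import math

# (stratum size Nh, sample size nh, correct in sample TP) for ASMOV, Table 18
strata = [(779, 75, 16), (349, 75, 38), (135, 75, 51)]

N = sum(Nh for Nh, _, _ in strata)
# approximated precision P*: weighted average of the per-stratum precisions
p_star = sum((Nh / N) * (TP / nh) for Nh, nh, TP in strata)

# 95% margin of error for a stratified proportion (no finite-population correction)
var = sum((Nh / N) ** 2 * (TP / nh) * (1 - TP / nh) / nh for Nh, nh, TP in strata)
margin = 1.96 * math.sqrt(var)
# p_star evaluates to about 0.34, matching the 34% reported for ASMOV</p>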
        <p>Discussion Although the ASMOV system achieves the highest result in two strata
and the Lily system the highest approximated precision P*, the overlapping margins
of error mean that we cannot say that one system outperforms another. In order to make the
approximated results more decisive, we should take larger samples. Regarding relative recall,
ASMOV achieves the highest value.</p>
      </sec>
      <sec id="sec-8-2">
        <title>Evaluation based on reference alignments</title>
        <p>This is the classical evaluation method where the alignments from participants are compared against the reference alignment.
So far we have built the reference alignment over five ontologies (cmt, confOf, ekaw,
iasted, sigkdd, i.e. 10 alignments); we plan to cover the whole collection in the future.
The decision about each correspondence was based on a majority vote of three
evaluators. In the case of disagreement among the evaluators, the given correspondence was
the subject of a broader public discussion during the Consensus building workshop, in order
to find consensus and update the reference alignment; see the paragraph (below) on
evaluation based on the consensus of experts.</p>
        <p>Table 20. Precision (P), recall (R) and F-measure (F-meas) against the reference alignments,
for the thresholds t=0.2, t=0.5 and t=0.7:
System   t=0.2: P / R / F-meas      t=0.5: P / R / F-meas      t=0.7: P / R / F-meas
ASMOV    51.8% / 38.6% / 44.2%      72.2% / 11.4% / 19.7%      100.0% / 6.1% / 11.6%
DSSim    34.0% / 57.9% / 42.9%      34.0% / 57.9% / 42.9%       34.0% / 57.9% / 42.9%
Lily     43.2% / 50.0% / 46.3%      60.4% / 28.1% / 38.3%       66.7% / 8.8% / 15.5%</p>
        <p>Table 20 gives traditional precision (P), recall (R), and F-measure (F-meas)
computed for three different thresholds (0.2, 0.5, and 0.7). As we have mentioned, these
results are biased because the current reference alignment only covers a subset of all
ontology pairs from the OntoFarm collection.</p>
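        <p>As a reminder, these thresholded measures can be computed as follows (a generic sketch, not the evaluation harness actually used):</p>
        <p>def precision_recall_f(alignment, reference, t):
    """alignment: iterable of (correspondence, confidence) pairs;
    reference: set of correspondences deemed correct.
    Correspondences with confidence below the threshold t are discarded."""
    found = {c for (c, conf) in alignment if conf >= t}
    correct = found.intersection(reference)
    p = len(correct) / len(found) if found else 0.0
    r = len(correct) / len(reference)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f</p>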
        <p>Discussion All systems achieve their highest F-measure for threshold 0.2, with the
Lily system obtaining the highest F-measure overall (46.3%). The ASMOV system achieves
the highest precision for each of the three thresholds (51.8%, 72.2%, 100.0%); however, this
comes at the expense of recall, which is the lowest for each of the three thresholds (38.6%,
11.4%, 6.1%). The highest recall (57.9%) was obtained by the DSSim system.</p>
      </sec>
      <sec id="sec-8-3">
        <title>Evaluation based on data mining method</title>
        <p>This kind of evaluation is based on data mining, and the goal is to reveal non-trivial findings about the participating systems.
such as the confidence measure, validity, kinds of ontologies, particular ontologies, and
mapping patterns. Mapping patterns have been introduced in [19]. For the purpose of
our current experiment we extended detected mapping patterns with some patterns
inspired by correspondence patterns [16] and with error mapping patterns.</p>
        <p>Basically, mapping patterns are patterns dealing with (at least) two ontologies.
These patterns reflect the structure of the ontologies on the one side, and on the other
side they include correspondences between entities of the ontologies. Initially, we discovered
mapping patterns such as occurrences of certain complex structures in the
participants' results. They are neither the result of a deliberate activity of humans, nor are
they a priori ‘desirable’ or ‘undesirable’. Here are three such mapping patterns between
concepts:
– MP1 (Parent-child triangle): it consists of an equivalence correspondence between
A and B and an equivalence correspondence between A and a child of B, where A
and B are from different ontologies.
– MP2 (Mapping along taxonomy): it consists of simultaneous equivalence
correspondences between parents and between children.
– MP3 (Sibling-sibling triangle): it consists of simultaneous correspondences
between class A and two sibling classes C and D where A is from one ontology
and C and D are from another ontology.</p>
        <p>This year, we added three mapping patterns inspired by correspondence patterns [16]:
– MP4: it is inspired by the ‘class by attribute’ correspondence pattern, where the
class in one ontology is restricted to only those instances having a particular value
for a given attribute/relation.
– MP5: it is inspired by the ‘composite’ correspondence pattern. It consists of a
class-to-class equivalence correspondence and a property-to-property equivalence
correspondence, where classes from the first correspondence are in the domain or in the
range of properties from the second correspondence.
– MP6: it is inspired by the ‘attribute to relation’ correspondence pattern where a
datatype and an object property are aligned as an equivalence correspondence.
Furthermore, there are error mapping patterns, which can disclose incorrect
correspondences:
– MP7: it is the variant of MP5 ‘composite pattern’. It consists of an equivalence
correspondence between two classes and an equivalence correspondence between
two properties, where one class from the first correspondence is in the domain and
another class from that correspondence is in the range of equivalent properties,
except the case where the domain and the range are the same class.
– MP8: it consists of an equivalence correspondence between A and B and an
equivalence correspondence between a child of A and a parent of B where A and B are
from different ontologies. It is sometimes referred to as the criss-cross pattern.
– MP9: it is the variant of MP3, where the two sibling classes C and D are disjoint.</p>
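        <p>Such error patterns can be detected mechanically. As an illustration, here is a minimal sketch of a detector for the MP8 criss-cross pattern; the correspondence set and the parents mapping are assumed inputs, not part of the actual evaluation tooling:</p>
        <p>def find_criss_cross(correspondences, parents):
    """Return pairs of equivalence correspondences (A, B) and (A2, B2) such that
    A2 is a child of A and B2 is a parent of B (the MP8 criss-cross pattern).
    correspondences: set of (class_in_O1, class_in_O2) pairs;
    parents: maps a class to the set of its strict (transitive) superclasses."""
    hits = []
    for (a, b) in correspondences:
        for (a2, b2) in correspondences:
            if a in parents.get(a2, set()) and b2 in parents.get(b, set()):
                hits.append(((a, b), (a2, b2)))
    return hits</p>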
        <p>Table 21. Number of correspondences (ASMOV/DSSim/Lily) instantiating each mapping pattern:
     MP1      MP2          MP3      MP4          MP5          MP6          MP7     MP8    MP9
ALL  0/543/0  255/146/115  0/527/0  261/828/354  467/115/585  132/115/151  0/6/13  0/7/4  0/165/0
REF  0/70/0   39/19/17     0/58/0   35/88/35     51/6/29      1/2/3        0/0/0   0/3/0  0/27/0</p>
        <p>In Table 21 there are numbers of correspondences found by each system
(ASMOV/DSSim/Lily) that belong to a particular mapping pattern. The row ‘ALL’ relates
to all equivalence correspondences delivered by participants with confidence measure
higher than 0.0 (1540/1950/1744). The row ‘REF’ relates to all equivalence
correspondences delivered by participants with confidence measure higher than 0.0 for pairs of
ontologies for which there exists the reference alignment (182/194/132).</p>
        <p>For the data-mining analysis we employed the 4ft-Miner procedure of the
LISp-Miner data mining system21 for mining association rules. For the sake of brevity, we
mention a few examples of the interesting association hypotheses discovered22:
– For correspondences with low confidence measure [0,0.4), the ASMOV system produces
incorrect correspondences for the cmt and confOf pair of ontologies 1.2 times more often
than all systems (on average) produce such incorrect correspondences for those two
ontologies over all confidence measures.
– The Lily system outputs correspondences that belong to the mapping pattern MP7
almost three times more often than all systems do (on average).
– For correspondences with low confidence measure [0,0.4), the Lily system produces
correct correspondences for pairs of ontologies involving the iasted ontology 1.2 times
more often than all systems (on average) produce such correct correspondences for those
pairs over all confidence measures.</p>
        <p>Discussion The hypotheses above disclose potentially interesting
relationships for the developers of the systems. From Table 21 (particularly the numbers for MP7,
MP8, and especially MP9), we can say that applying the error mapping patterns
would improve the systems' performance (for Lily to some degree, and especially for
DSSim) in terms of precision, whereas the results of the ASMOV system do not contain
any instances of the error mapping patterns, thanks to its semantic verification phase.
21 http://lispminer.vse.cz/
22 For association hypotheses involving confidence measures we used the REF correspondences;
otherwise we used the ALL correspondences.</p>
        <p>Evaluation based on alignment incoherence Several ways to measure the
incoherence of an alignment have been proposed in [13]. In the following we focus on the
maximum cardinality measure mtcard, which has been introduced as a revision-based
measure. The mtcard measure compares the number of correspondences which have to be
removed to arrive at a coherent subset with the number of all correspondences in the
alignment. The conference ontologies are well suited for an analysis of alignment
incoherence, since most of them contain negation as well as different kinds of restrictions
exploiting the range of OWL-DL expressivity.</p>
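        <p>Although the formal definition is given in [13], the measure can plausibly be written as:</p>
        <p>\[
m^{t}_{card}(\mathcal{A}) = \frac{|\mathcal{A}| - \max\{\,|\mathcal{A}'| : \mathcal{A}' \subseteq \mathcal{A},\ \mathcal{A}' \text{ coherent}\,\}}{|\mathcal{A}|}
\]</p>
        <p>i.e., the fraction of correspondences that must be removed from an alignment to arrive at a maximal coherent subset.</p>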
        <p>Due to practical considerations we decided to modify the approach with respect to
two aspects. First, we observed that many logical problems induced by an alignment
are related to properties. Therefore, we applied a different definition of incoherence
taking property unsatisfiability into account. We defined an ontology to be incoherent
whenever there exists an unsatisfiable concept or property. This extends the classical
approach in which ontology incoherence depends only on the unsatisfiability of
concepts (see for example [14]). Second, we observed that matching object properties to
datatype properties might be an appropriate way to cope with semantic heterogeneity.
Nevertheless, such a correspondence would directly result in an incoherent alignment
under the direct, natural translation of a correspondence into an axiom. Therefore, we used
a slightly modified variant of the natural translation and translated each correspondence
between properties R1 and R2 into the axiom ∃R1.⊤ ≡ ∃R2.⊤ (we only considered
equivalence correspondences).</p>
        <p>Table 22. Incoherence results per system (number of evaluated alignments, with the total
number of correspondences in parentheses; number of coherent alignments; mean and median of mtcard):
System   Alignments   Coherent   Mean    Median
ASMOV    44 (1010)    8          0.135   0.14
Lily     45 (851)     9          0.138   0.145
DSSim    45 (769)     3          0.206   0.166</p>
        <p>In our experimental evaluation we considered only a subset of 10 ontologies and
evaluated the alignments between all possible pairs. We excluded five ontologies
(Cocus, Confious, Iasted, Paperdyne and OpenConf) because we only focused on
alignments submitted by each participant and encountered reasoning problems for some of
these ontologies. Table 22 summarizes the main results. First of all, we notice that only
a small fraction of the submitted alignments is coherent. For ASMOV and Lily, 18% and
20% respectively of the evaluated alignments were coherent, while DSSim generated only 7%
coherent alignments. We also computed the mean of the mtcard measure over all analyzed
alignments. We observe that ASMOV and Lily generate alignments with a lower degree
of incoherence (0.135 and 0.138) compared to DSSim (0.206).</p>
        <p>The distribution of the measured values further supports our first impression.
Figure 13 shows the second and third quartiles as well as the median of the values
measured via mtcard. While Lily and especially ASMOV found a way to prevent highly
incoherent alignments, 25% of the alignments generated by DSSim have a degree of
incoherence greater than or equal to 0.288. For each of these alignments there are logical
reasons to remove at least one-fourth of its correspondences. The differences between
ASMOV, Lily and DSSim revealed by our incoherence analysis fit with the differences
we reported on the occurrence of the error mapping patterns MP7 to MP9.</p>
        <p>Fig. 13. Distribution of mtcard values, depicting second quartile, median, and third quartile.</p>
        <p>Discussion Some of the participants implemented a component to debug or validate
generated alignments, namely ASMOV and Lily. To our knowledge these debugging
techniques are based on detecting certain structural patterns in correspondence pairs
(MP7 to MP9 can be seen as examples of such patterns). Although these strategies
cannot ensure the coherence of an alignment, such an approach is nevertheless an efficient
way to avoid full-fledged reasoning while increasing the degree of coherence. Taking
alignment coherence into account can be a useful guide for improving the results of a
matching system and our results suggest that there is still room for improvement.</p>
        <p>Evaluation based on consensus of experts During the so-called Consensus building
workshop we discussed 5 controversial correspondences. The main goal of this
discussion among experts was to find consensus about those correspondences and to track
the arguments for and against them. This session ratified insights from previous years and
showed that finding consensus is a time-consuming and difficult, yet doable, activity.
Some other relevant topics were raised. For instance, the open-world assumption
vs. the closed-world assumption was considered an important factor for understanding
the description of entities in ontologies. The need for expressive alignments also arose,
for expressing complex correspondences combining several elements (classes or
properties). The consensus reached is captured in the reference alignment, and the
discussion can proceed further on the blog23.
23 http://keg.vse.cz/oaei/
In conclusion, we evaluated the participants' results from diverse perspectives via five distinct
evaluation methods. For the next year of this track, we also plan to evaluate subsumption
correspondences and to further extend the reference alignment. Based on the participants'
feedback, we changed the ontologies of the OntoFarm collection to be OWL DL
compliant for the next year of the conference track.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>Lessons learned and suggestions</title>
      <p>The lessons learned this year are quite similar to those of previous years, but
there remain lessons not really taken into account, which we mark with an asterisk (*).
We reiterate the lessons that still apply, together with new ones:
A) Unfortunately, we have not been able to maintain the better schedule of last year.</p>
      <p>With the schedule reduced by one month (leaving about three months overall),
it is very difficult to run OAEI.</p>
      <p>B) Some of the best systems of last year did not enter. The reasons invoked were
lack of time and/or no improvement in the systems. This pleads for continuous
rather than yearly evaluation.</p>
      <p>C) The trend that more matching systems are able to enter such an evaluation
seems to be slowing down. However, the number of tracks the existing systems are able
to consider is still very encouraging for the progress of the field.</p>
      <p>D) We can confirm that systems that enter the campaign several times tend to
improve over the years.</p>
      <p>E*) The benchmark test case is not discriminant enough between systems. It is still
useful for evaluating the strengths and weaknesses of algorithms but does not seem
to be sufficient anymore for comparing algorithms. We have improved tests this
year, while preserving comparability with previous years, but more is required, in
particular in automatic test generation.</p>
      <p>F) We have had more proposals for test cases this year. However, the difficult lesson is
that proposing a test case is not enough: there is a lot of remaining work in preparing
the evaluation. Fortunately, with tool improvements, it becomes easier to perform
the evaluation.</p>
      <p>G) There are now test cases where non-equivalence alignments matter, and there
are systems, e.g., ASMOV, Aroma, TaxoMap, which are able to deliver such
alignments. We thus intend to have such a test case next year. A discussion about
instance matching tests has also arisen.</p>
      <p>H) Thanks to the robustness of the evaluation tools, we had, like last year, very few
syntactic problems this year. However, it seems that many matchers are too dependent on
particular operating systems, and many still do not deal correctly with ontology
URIs (see the Error cells in Table 3).</p>
      <p>I) The partition between systems able to deal with large ontologies and systems
unable to do so seems to be gradually dissolving: systems seem to be able to perform
more tasks. However, this requires a significant amount of manpower.
Future plans for the Ontology Alignment Evaluation Initiative are certainly to go ahead
and to improve the functioning of the evaluation campaign. This involves:
– Finding new real world test cases, especially with expressive ontologies;
– Improving the tests along the lessons learned;
– Accepting continuous submissions (through validation of the results);
– Improving the measures to go beyond precision and recall (we have done this for
generalized precision and recall as well as for using precision/recall graphs, and
will continue with other measures);
– Developing a definition of test hardness.</p>
      <p>Of course, these are only suggestions that will be refined during the coming year;
see [17] for a detailed discussion of the ontology matching challenges.</p>
      <p>This year we had fewer systems overall entering the evaluation campaign, though still a
significant number of systems. It seems, however, that they entered more tests
individually (50 test entries overall last year against 48 this year, from fewer systems), so
systems seem to be more up to the challenge.</p>
      <p>As noticed in previous years, the systems which are not entering for the first time are
those which perform better. This shows that, as expected, the field of ontology matching is
getting stronger (and we hope that evaluation has contributed to this progress).</p>
      <p>All participants have provided descriptions of their systems and of their experience in
the evaluation. These OAEI papers, like the present one, have not been peer reviewed.
However, they are full contributions to this evaluation exercise and reflect the hard work
and clever insight people put into the development of participating systems. Reading the
participants' papers should help people involved in ontology matching to find out what
makes these algorithms work and what could be improved. Sometimes participants offer
alternative evaluation results.</p>
      <p>The Ontology Alignment Evaluation Initiative will continue these tests by
improving both the test cases and the testing methodology to make them more accurate. Further
information can be found at:</p>
      <sec id="sec-9-1">
        <title>Acknowledgments</title>
        <p>We warmly thank each participant of this campaign. We know that they have worked
hard for having their results ready and they provided insightful papers presenting their
experience. The best way to learn about the results remains to read the following papers.</p>
        <p>We are grateful to Martin Ringwald and Terry Hayamizu for providing the reference
alignment for the anatomy ontologies.</p>
        <p>Thanks to Andrew Bagdanov, Aureliano Gentile, and Gudrun Johannsen (Food and
Agriculture Organization of the United Nations) for evaluating the FAO task. We also
thank the teams of the Food and Agriculture Organization of the United Nations (FAO)
for allowing us to use their ontologies. Caterina Caracciolo and Jérôme Euzenat have been
partially supported by the European integrated project NeOn (IST-2005-027595).</p>
        <p>We are grateful to Henk Matthezing, Lourens van der Meij and Shenghui Wang who
have made crucial contributions to implementation and reporting for the Library track.
The evaluation at KB could not have been possible without the commitment of Yvonne
van der Steen, Irene Wolters, Maarten van Schie, and Erik Oltmans.</p>
        <p>We thank Chris Bizer, Fabian Suchanek and Jens Lehmann for their help with the
DBPedia dataset. We also thank Willem van Hage for his advice. We gratefully
acknowledge the Dutch Institute for Sound and Vision for allowing us to use the GTAA.</p>
        <p>We are grateful to Peter Bartoš (Brno University of Technology, CZ) for
participating in the creation of the partial reference alignment for the conference track. In
addition, Ondrˇej Šváb-Zamazal and Vojteˇch Svátek were supported by the IGA VSE grant
no.20/08 “Evaluation and matching ontologies via patterns”.</p>
        <p>We also thank the other members of the Ontology Alignment Evaluation
Initiative Steering Committee: Wayne Bethea (Johns Hopkins University, USA), Alfio
Ferrara (Università degli Studi di Milano, Italy), Lewis Hart (AT&amp;T, USA), Tadashi
Hoshiai (Fujitsu, Japan), Todd Hughes (DARPA, USA), Yannis Kalfoglou (University
of Southampton, UK), John Li (Teknowledge, USA), Miklos Nagy (The Open
University, UK), Natasha Noy (Stanford University, USA), Yuzhong Qu (Southeast University,
China), York Sure (University of Karlsruhe, Germany), Jie Tang (Tsinghua University,
China), Raphaël Troncy (CWI, Amsterdam, The Netherlands), Petko Valtchev
(Université du Québec à Montréal, Canada), and George Vouros (University of the Aegean,
Greece).
8. Jérôme Euzenat, Antoine Isaac, Christian Meilicke, Pavel Shvaiko, Heiner Stuckenschmidt,
Ondrej Svab, Vojtech Svatek, Willem Robert van Hage, and Mikalai Yatskevich. Results of
the ontology alignment evaluation initiative 2007. In Pavel Shvaiko, Jérôme Euzenat, Fausto
Giunchiglia, and Bin He, editors, Proceedings of the 2nd ISWC international workshop on
Ontology Matching, Busan (KR), pages 96–132, 2007.
9. Fausto Giunchiglia, Mikalai Yatskevich, Paolo Avesani, and Pavel Shvaiko. A large scale
dataset for the evaluation of ontology matching systems. The Knowledge Engineering Review,
24(2), 2009, to appear.
10. Ryutaro Ichise, Masahiro Hamasaki, and Hideaki Takeda. Discovering relationships among
catalogs. In Proceedings of the 7th International Conference on Discovery Science, pages
371–379, Padova (IT), 2004.
11. Ryutaro Ichise, Hideaki Takeda, and Shinichi Honiden. Integrating multiple internet
directories by instance-based learning. In Proceedings of the 18th International Joint Conference
on Artificial Intelligence (IJCAI), pages 22–28, Acapulco (MX), 2003.
12. Antoine Isaac, Henk Matthezing, Lourens van der Meij, Stefan Schlobach, Shenghui Wang,
and Claus Zinn. Putting ontology alignment in context: Usage scenarios, deployment and
evaluation in a library case. In Proceedings of the 5th European Semantic Web Conference
(ESWC), pages 402–417, Tenerife (ES), 2008.
13. Christian Meilicke and Heiner Stuckenschmidt. Incoherence as a basis for measuring the
quality of ontology mappings. In Proceedings of the 3rd ISWC international workshop on
Ontology Matching, pages 1–12, Karlsruhe (DE), 2008.
14. Guilin Qi and Anthony Hunter. Measuring incoherence in description logic-based ontologies.</p>
        <p>In Proceedings of the 6th International Semantic Web Conference (ISWC), pages 381–394,
Busan (KR), 2007.
15. Marta Sabou, Mathieu d’Aquin, and Enrico Motta. Using the semantic web as background
knowledge for ontology mapping. In Proceedings of the ISWC international workshop on
Ontology Matching, pages 1–12, Athens (GA US), 2006.
16. François Scharffe and Dieter Fensel. Correspondence patterns for ontology alignment. In
Proceedings of the 16th International Conference on Knowledge Acquisition, Modeling and
Management (EKAW), pages 83–92, Acitrezza (IT), 2008.
17. Pavel Shvaiko and Jérôme Euzenat. Ten challenges for ontology matching. In Proceedings of
the 7th International Conference on Ontologies, DataBases, and Applications of Semantics
(ODBASE), pages 1164–1182, Monterrey (MX), 2008.
18. York Sure, Oscar Corcho, Jérôme Euzenat, and Todd Hughes, editors. Proceedings of the</p>
        <p>ISWC workshop on Evaluation of Ontology-based tools (EON), Hiroshima (JP), 2004.
19. Ondrej Svab, Vojtech Svatek, and Heiner Stuckenschmidt. A study in empirical and
‘casuistic’ analysis of ontology mapping results. In Proceedings of the 4th European Semantic Web
Conference (ESWC), pages 655–669, Innsbruck (AU), 2007.
20. Willem Robert van Hage, Antoine Isaac, and Zharko Aleksovski. Sample evaluation of
ontology matching systems. In Proceedings of the ISWC workshop on Evaluation of Ontologies
and Ontology-based tools, pages 41–50, Busan (KR), 2007.</p>
        <p>Roma, Grenoble, Tokyo, Amsterdam, Trento, Mannheim, and Prague, December 2008</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Zharko</given-names>
            <surname>Aleksovski</surname>
          </string-name>
          , Warner ten Kate, and Frank van Harmelen.
          <article-title>Exploiting the structure of background knowledge used in ontology matching</article-title>
          .
          <source>In Proceedings of the ISWC international workshop on Ontology Matching</source>
          , pages
          <fpage>13</fpage>
          -
          <lpage>24</lpage>
          , Athens (GA US),
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Ben</given-names>
            <surname>Ashpole</surname>
          </string-name>
          , Marc Ehrig, Jérôme Euzenat, and Heiner Stuckenschmidt, editors.
          <source>Proceedings of the K-Cap workshop on Integrating Ontologies</source>
          , Banff (CA),
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>Oliver</given-names>
            <surname>Bodenreider</surname>
          </string-name>
          , Terry F. Hayamizu, Martin Ringwald, Sherri De Coronado, and Songmao Zhang.
          <article-title>Of mice and men: Aligning mouse and human anatomies</article-title>
          .
          <source>In Proceedings of the American Medical Informatics Association (AIMA) Annual Symposium</source>
          , pages
          <fpage>61</fpage>
          -
          <lpage>65</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Marc</given-names>
            <surname>Ehrig</surname>
          </string-name>
          and
          <string-name>
            <given-names>Jérôme</given-names>
            <surname>Euzenat</surname>
          </string-name>
          .
          <article-title>Relaxed precision and recall for ontology matching</article-title>
          .
          <source>In Proceedings of the K-Cap workshop on Integrating Ontologies</source>
          , pages
          <fpage>25</fpage>
          -
          <lpage>32</lpage>
          , Banff (CA),
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Jérôme</given-names>
            <surname>Euzenat</surname>
          </string-name>
          .
          <article-title>An API for ontology alignment</article-title>
          .
          <source>In Proceedings of the 3rd International Semantic Web Conference (ISWC)</source>
          , pages
          <fpage>698</fpage>
          -
          <lpage>712</lpage>
          , Hiroshima (JP),
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Jérôme</given-names>
            <surname>Euzenat</surname>
          </string-name>
          , Malgorzata Mochol, Pavel Shvaiko, Heiner Stuckenschmidt, Ondrej Svab, Vojtech Svatek, Willem Robert van Hage,
          <string-name>
            <given-names>and Mikalai</given-names>
            <surname>Yatskevich</surname>
          </string-name>
          .
          <article-title>Results of the ontology alignment evaluation initiative 2006</article-title>
          . In Pavel Shvaiko, Jérôme Euzenat, Natalya Noy, Heiner Stuckenschmidt, Richard Benjamins, and Michael Uschold, editors,
          <source>Proceedings of the ISWC international workshop on Ontology Matching, Athens (GA US)</source>
          , pages
          <fpage>73</fpage>
          -
          <lpage>95</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>Jérôme</given-names>
            <surname>Euzenat</surname>
          </string-name>
          and
          <string-name>
            <given-names>Pavel</given-names>
            <surname>Shvaiko</surname>
          </string-name>
          . Ontology matching. Springer, Heidelberg (DE),
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>