<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A web-based Evaluation Service for Ontology Matching</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jérôme Euzenat</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christian Meilicke</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Heiner Stuckenschmidt</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Cassia Trojahn</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>INRIA &amp; LIG</institution>
          ,
          <addr-line>Grenoble</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Mannheim</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Evaluation of semantic web technologies at large scale, including ontology matching, is an important topic of semantic web research. This paper presents a web-based evaluation service for automatically executing the evaluation of ontology matching systems. The service is based on a web service interface wrapping the functionality of the matching tool to be evaluated, and it allows developers to launch evaluations of their tool at any time on their own. Furthermore, the service can be used to visualise and manipulate the evaluation results. The approach allows the tool to be executed on the machine of the tool developer without the need for a runtime environment.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Evaluation of matching tools aims at helping designers and developers of such
tools to improve them and at helping users to assess the suitability of the
proposed methods to their needs. The Ontology Alignment Evaluation Initiative
(OAEI, http://oaei.ontologymatching.org/) has been the basis for evaluation over the last years [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It is an annual
evaluation campaign that offers several data sets organized by different groups
of researchers. However, additional effort has to be made in order to catch up
with the growth of ontology matching technology. The SEALS project (Semantic
Evaluation at Large Scale, http://about.seals-project.eu/) aims at providing
standardized datasets, evaluation campaigns for typical semantic web
tools and, in particular, a software infrastructure for automatically executing
evaluations. In this context, we have developed a web-based evaluation service
that allows developers to launch their own evaluations at any time while using a
set of approved datasets. It is based on the use of a web service interface
wrapping the functionality of the matching tool to be evaluated. In the following, we
describe the main components of our service and present a complete evaluation
example.
      </p>
      <sec id="sec-1-1">
        <title>Architecture</title>
        <p>The evaluation service is composed of three main components: a web user
interface, a BPEL workflow and a set of web services. The web user interface is the
entry point to the application. This interface is deployed as a web application in
a Tomcat application server behind an Apache web server. It invokes the BPEL
workflow, which is executed on the ODE engine (http://ode.apache.org/). This
engine runs as a web application inside the application server.</p>
        <p>The BPEL process accesses several services that provide different
functionalities. The validation service ensures that the matcher web service is available
and fulfills the minimal requirements to generate an alignment in the correct
format. A redirect service is used to redirect the request for running a matching
task to the matcher service endpoint. The test iterator service is responsible
for iterating over test cases and providing references to the required files. The
evaluation service provides measures such as precision and recall for evaluating
the alignments generated by the matching system. A result service is used for
storing evaluation results in a relational database.</p>
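        <p>The evaluation measures mentioned above can be illustrated with a short, self-contained sketch. The function below is hypothetical (the paper does not give the actual implementation) and treats an alignment as a set of correspondence triples, counting a generated correspondence as correct only if it appears verbatim in the reference alignment.</p>

```python
# Hypothetical sketch: precision and recall over sets of correspondences,
# as an evaluation service like the one described might compute them.
# The triple representation (entity1, entity2, relation) is illustrative.

def precision_recall(generated, reference):
    """Compare a generated alignment against a reference alignment.

    Both arguments are sets of (entity1, entity2, relation) triples;
    a correspondence counts as correct only on an exact match.
    """
    correct = generated & reference
    precision = len(correct) / len(generated) if generated else 0.0
    recall = len(correct) / len(reference) if reference else 0.0
    return precision, recall

reference = {("Paper", "Article", "="), ("Author", "Writer", "=")}
generated = {("Paper", "Article", "="), ("Review", "Report", "=")}

p, r = precision_recall(generated, reference)
# p = 0.5 (1 of 2 generated correspondences is correct)
# r = 0.5 (1 of 2 reference correspondences is found)
```

        <p>Real OAEI evaluation additionally supports relaxed and semantic variants of these measures; the sketch only shows the strict set-based case.</p>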
        <p>Once the web service matcher implementation has been deployed and
published at a stable endpoint by the tool developer, its matching method can be
invoked within the BPEL workflow. For that reason, an evaluation starts by
specifying the web service endpoint via the web interface. This data is then
forwarded to BPEL as input parameters. The complete evaluation workflow is
executed as a series of calls to the services listed above. The specification of the
web service endpoint is relevant for the invocation of the validation and
redirect services, which internally implement web service clients that connect to
the URL specified in the web user interface.</p>
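        <p>The overall call sequence of the BPEL workflow can be sketched in pseudocode-like Python. All function names and signatures below are illustrative stand-ins for the validation, test iterator, redirect, evaluation and result services, not the actual SEALS interfaces.</p>

```python
# Hypothetical sketch of the call sequence the BPEL workflow performs.
# The five callables stand in for the services described above.

def run_evaluation(endpoint_url, validate, iterate_tests, redirect, evaluate, store):
    """Orchestrate one evaluation run against a matcher endpoint."""
    if not validate(endpoint_url):          # matcher reachable, output format OK?
        return "validation failed"
    results = []
    for test_case in iterate_tests():       # iterate over the test suite
        alignment = redirect(endpoint_url, test_case)   # invoke the remote matcher
        measures = evaluate(alignment, test_case)       # e.g. precision and recall
        store(endpoint_url, test_case, measures)        # persist in the database
        results.append(measures)
    return results
```

        <p>The early return on failed validation mirrors the direct feedback described below: the workflow never runs the test suite against an endpoint that cannot produce alignments in the correct format.</p>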
        <p>The test and result services used in the BPEL process need access to
additional data resources. For accessing the test data, the test web service reads
the metadata describing the test suite, extracts the relevant information and
forwards the URLs of the required documents via the redirect service to the
matcher that is currently being evaluated. These documents can then be accessed
directly via a standard HTTP GET request by the matching system. The result
web service uses a connection to the database to store the results for each
execution of an evaluation workflow. For visualizing and manipulating the stored
results we use an OLAP (Online Analytical Processing) application, which accesses
the database to retrieve the evaluation results. Results can be re-accessed at
any time, e.g., for comparing different tool versions against each other.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Evaluating a Matching Tool</title>
      <p>
        For demonstration purposes, we have extended the Anchor-Flood [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] system with the web service interface, available at
http://mindblast.informatik.uni-mannheim.de:8080/sealstools/aflood/matcherWS?wsdl.
This system has participated in the two previous OAEI campaigns and is thus a
typical evaluation target. The current version of the web application described
in the following is available at http://seals.inrialpes.fr/platform/. Complete
information about how to create a valid matcher is available at
http://alignapi.gforge.inria.fr/tutorial/tutorial5/.
      </p>
      <sec id="sec-2-1">
        <title>Running an Evaluation</title>
        <p>In order to start an evaluation, one must specify the URL of the matcher
service, the class implementing the required interface and the name of the matching
system to be evaluated (Figure 1). Three of the OAEI datasets have been
selected, namely Anatomy, Benchmark and Conference. In this specific example,
we have used the Conference test case.</p>
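        <p>The interface a matcher has to implement can be pictured as a single align operation that receives the locations of two ontologies and returns an alignment document. The class and method names below are purely illustrative; the actual required interface is specified in the tutorial referenced above.</p>

```python
# Hypothetical sketch of the operation a matcher web service exposes:
# given the locations of two ontologies, return an alignment document.
# Names are illustrative, not the actual SEALS interface.

class MatcherService:
    """Wraps an existing matching tool behind a single align operation."""

    def __init__(self, matching_tool):
        self.matching_tool = matching_tool

    def align(self, source_url, target_url):
        # Delegate to the wrapped tool; the result would be serialized
        # in the Alignment format expected by the evaluation workflow.
        return self.matching_tool.match(source_url, target_url)
```

        <p>Wrapping an existing tool this way is exactly what was done for Anchor-Flood: the tool itself is unchanged, and only the thin service layer is added.</p>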
        <p>On submitting the form data, the BPEL workflow is invoked. It first validates
the specified web service as well as its output format. In case of a problem, the
concrete validation error is displayed to the user as direct feedback. In case of a
successfully completed validation, the system returns a confirmation message and
continues with the evaluation process. Every time an evaluation is conducted, the
results are stored under the endpoint address of the deployed matcher (Figure 2).</p>
        <p>When clicking on one of the three evaluation IDs in Figure 2, the results
are displayed as a table (Figure 3). The results table is (partially) available while
the evaluation itself is still running; by reloading the page from time to time,
the user can follow the progress of a running evaluation. In the results
table, precision and recall are listed for each test case. Moreover, clicking on the
alignment icon in Figure 3 opens a detailed view on the alignment
results (Figure 4).</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Final Remarks</title>
      <p>Automatic evaluation of matching tools is a key issue for promoting the
development of ontology matching. In this demo we have presented a web-based tool
for automatic evaluation of matching systems that is available to the research
community at any time. The major benefit of this service is to allow developers
to debug their systems, run their own evaluations, and manipulate the results
immediately in a direct feedback cycle.</p>
      <p>Acknowledgements The authors are partially supported by the SEALS project
(IST-2009-238975).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>J.</given-names>
            <surname>Euzenat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ferrara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hollink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Malaise</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Meilicke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Nikolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Scharffe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shvaiko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Spiliopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Stuckenschmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Svab-Zamazal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Svatek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. T.</given-names>
            <surname>dos Santos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Vouros</surname>
          </string-name>
          .
          <article-title>Results of the ontology alignment evaluation initiative 2009</article-title>
          .
          <source>In Ontology Matching Workshop</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>H.</given-names>
            <surname>Seddiqui</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Aono</surname>
          </string-name>
          .
          <article-title>Anchor-Flood: results for OAEI 2009</article-title>
          .
          <source>In Proceedings of the ISWC 2009 workshop on ontology matching</source>
          , Washington DC, USA,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>