<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>IBSE: An OWL Interoperability Evaluation Infrastructure</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Raúl García-Castro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Asunción Gómez-Pérez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jesús Prieto-González</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ontology Engineering Group, Departamento de Inteligencia Artificial, Facultad de Informática, Universidad Politécnica de Madrid</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The technology that supports the Semantic Web presents a great diversity and, although the tools use ontologies of different kinds, not all of them share a common knowledge representation model, which may pose problems when they try to interoperate. The Knowledge Web European Network of Excellence is organizing a benchmarking of the interoperability of ontology tools using OWL as the interchange language, with the goal of assessing and improving tool interoperability. This paper presents the development of IBSE, an evaluation infrastructure that executes the benchmarking experiments automatically and provides an easy way of analysing the results. Thus, including new tools in the evaluation infrastructure is simple and straightforward.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>The technology that supports the Semantic Web presents a great diversity and is
growing in parallel with the Semantic Web. This technology appears in different forms
(ontology development tools, ontology repositories, ontology alignment tools, reasoners,
etc.) and, although all these tools use ontologies of different kinds, not all of them share
a common knowledge representation model.</p>
      <p>
        This diversity in the representation formalisms of the tools causes problems when
the tools try to interoperate, that is, to interchange information and use the information
that has been exchanged [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This is so because the tools must translate their
ontologies from their own knowledge models to a common one and vice versa, even when
using standard APIs for managing ontologies in the common knowledge model.
      </p>
      <p>OWL1 is the language recommended by the World Wide Web Consortium for defining
and instantiating ontologies; therefore, using OWL as a language for interchanging
ontologies now seems the right choice. However, the interoperability of Semantic Web
tools using OWL is unknown, and evaluating to what extent one tool is able to
interchange ontologies with others is quite difficult, as there are no means available for
performing this task easily.</p>
      <p>To this end, the Knowledge Web2 European Network of Excellence is organizing the
benchmarking of interoperability of ontology tools using OWL as interchange language
with the goal of assessing and improving the interoperability of the tools.
1 http://www.w3.org/2004/OWL/</p>
      <sec id="sec-1-1">
        <title>2 http://knowledgeweb.semanticweb.org/</title>
        <p>To allow as many tools as possible to participate in the benchmarking and to
minimise the effort devoted to this participation, we have developed IBSE, an
evaluation infrastructure that automatically executes the experiments, offers a simple way of
analysing the results, and permits smoothly including new tools into the infrastructure.</p>
        <p>The results of a first execution of the experiment, carried out with two tools, Jena, an
ontology repository, and WebODE, an ontology development tool, are also presented.</p>
        <p>This paper is structured as follows: Section 2 presents other interoperability
evaluation initiatives. Section 3 introduces the OWL Interoperability Benchmarking and
Section 4 the experiment to be performed in this benchmarking activity. Section 5 presents
the set of ontologies to use as input for the experiment. Section 6 deals with the OWL
ontologies used to represent the benchmarks and their results. Section 7 describes how
the evaluation infrastructure has been implemented and how it can be used. Section 8
provides an example of executing the experiment with Jena and WebODE. Finally,
Section 9 draws the conclusions from this work and proposes future lines of work.
</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2 Related Work</title>
      <p>
        This section presents two other initiatives that deal with interoperability evaluations:
EON 2003 Workshop. The central topic of the Second International Workshop on
Evaluation of Ontology-based Tools was the evaluation of ontology development
tools interoperability [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In this workshop, the participants were asked to model
ontologies with their ontology development tools and to perform different tests for
evaluating tool import, export and interoperability. In these experiments:
– There was no constraint regarding the use of the interchange language; of the
experiments carried out, only two used OWL as interchange language.
– Each experiment was performed with a different procedure; hence the results
obtained in that workshop did not provide general recommendations, only
specific ones for each participating ontology development tool.
      </p>
      <p>
        RDF(S) Interoperability Benchmarking. A benchmarking of the interoperability of
ontology development tools using RDF(S) as interchange language3 has been
organised in Knowledge Web, before we started the benchmarking presented in this
paper [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>In the RDF(S) Interoperability Benchmarking, the experimentation and the analysis of
the results were performed manually. This has the advantage of producing highly
detailed results, making it easier to diagnose problems in the tools and thus to improve them.
However, the manual execution and analysis of the results also make the
experimentation costly. Tool developers have often automated the execution of the benchmark
suites, but not always. Furthermore, the results obtained may be influenced by
human mistakes, and they depend on the people performing the experiments and on
their expertise with the tools.</p>
      <sec id="sec-2-1">
        <title>3 http://knowledgeweb.semanticweb.org/benchmarking interoperability/</title>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3 OWL Interoperability Benchmarking</title>
      <p>
        In the OWL Interoperability Benchmarking we have followed the Knowledge Web
benchmarking methodology [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] for ontology tools, a methodology used before for
benchmarking the interoperability of ontology development tools using RDF(S) as the
interchange language [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and for benchmarking the performance and the scalability of
ontology development tools [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>The two main goals that we want to achieve are:
– To assess and improve the interoperability of ontology development tools using
OWL as the interchange language. This would permit learning about the current
interoperability of the tools and maximizing the knowledge that these tools can
interchange while minimizing the information addition or loss.
– To identify the fragment of knowledge that ontology development tools can
share using OWL as the interchange language. As this fragment becomes larger,
more expressive ontologies can be interchanged among ontology development tools.</p>
      <p>The main changes to perform with regard to the RDF(S) Interoperability
Benchmarking presented above are the following:
– To broaden the scope of the benchmarking. While the RDF(S) Interoperability
Benchmarking targeted ontology development tools (even though ontology
repositories also participated), this time we consider any Semantic Web tool able
to read and write ontologies from/to OWL files (ontology repositories, ontology
merging and alignment tools, reasoners, ontology-based annotation tools, etc.).
– To automate the experiment execution and the analysis of the results. In the
OWL Interoperability Benchmarking we sacrifice a higher detail in the results to avoid
the experiments being conducted by humans. However, full automation of the result
analysis is not possible, since it requires a person to interpret the results; nevertheless,
the evaluation infrastructure automatically generates different visualizations and
summaries of the results in different formats (such as HTML or SVG) to allow drawing some
conclusions at a glance. Of course, an in-depth analysis of these results will still
be needed for extracting the causes of the problems encountered, improvement
recommendations, and the practices performed by developers.
– To define benchmarks and results using ontologies. The automation mentioned
above requires that both benchmarks and results be machine-processable, so we
have represented them using ontologies. Instances of these ontologies will include
the information needed to execute the benchmarks and all the results obtained in
their execution. This also allows having different predefined benchmark suites and
execution results available on the Web, which can be used by anyone to, for
example, classify and select tools according to their results.
– To use any group of ontologies as input for the experiments. Executing
benchmarks with no human effort can provide further advantages. We have implemented
the evaluation infrastructure to generate benchmark descriptions from any group
of ontologies in RDF(S) or OWL and to execute these benchmarks. Thus, we can
easily perform different experiments with large numbers of ontologies,
domain-specific ontologies, systematically-generated ontologies, etc.</p>
    </sec>
    <sec id="sec-4">
      <title>4 Experiment to be Performed</title>
      <p>The experiment to be performed consists in measuring the interoperability of the tools
participating in the benchmarking through the interchange of ontologies from one tool
to another. From these measurements, we will extract the interoperability between the
tools, the causes of problems, and improvement recommendations.</p>
      <p>Of the different ways that Semantic Web tools have to interoperate, we only consider
interoperability when interchanging ontologies using OWL. Therefore, the
functionalities affecting the results are the OWL importers and exporters of the tools. Also, with no
human intervention, we can only access tools through application programming
interfaces (APIs), and the operations performed to access them must be supported by most
of the Semantic Web tools. Therefore, the only operations to be performed by a tool
should be: to import one ontology from a file (to read one file with an ontology and to
store this ontology in the tool's knowledge model), and to export one ontology into a
file (to write into a file an ontology stored in the tool's knowledge model).</p>
      <p>During the experiment, a common group of benchmarks will be executed, each
benchmark describing one input OWL ontology that has to be interchanged between a
single tool and the others.</p>
      <p>Each benchmark execution comprises two sequential steps, shown in Figure 1.
Starting with a file that contains an ontology, the first step (Step 1) consists in importing the
file with the ontology into the origin tool and then exporting such ontology into a file
using the interchange language. The second step (Step 2) consists in importing the file
with the ontology (exported by the origin tool) into the destination tool and then
exporting such ontology into another file.</p>
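      <p>The two sequential steps can be sketched as a small driver. This is a minimal sketch: ToolManager mirrors the interface described in Section 7, and the pass-through tool used here is a hypothetical stand-in for a real origin or destination tool.</p>

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class BenchmarkExecution {

    // Mirrors the ToolManager interface described in Section 7.
    interface ToolManager {
        void importExport(String importFile, String exportFile,
                          String ontologyName, String namespace,
                          String language) throws Exception;
    }

    // Hypothetical tool whose combined import/export leaves the ontology
    // file unchanged; a real tool would translate through its own model.
    static class PassThroughTool implements ToolManager {
        public void importExport(String importFile, String exportFile,
                                 String ontologyName, String namespace,
                                 String language) throws Exception {
            Files.copy(Paths.get(importFile), Paths.get(exportFile),
                       StandardCopyOption.REPLACE_EXISTING);
        }
    }

    // Step 1: the origin tool imports the original file and exports the
    // intermediate one. Step 2: the destination tool imports the
    // intermediate file and exports the final one.
    static void interchange(ToolManager origin, ToolManager destination,
                            String original, String intermediate,
                            String finalFile, String name, String ns)
            throws Exception {
        origin.importExport(original, intermediate, name, ns, "OWL");
        destination.importExport(intermediate, finalFile, name, ns, "OWL");
    }
}
```

      <p>After a run, the original, intermediate and final files are exactly the three ontology states that the evaluation criteria below compare.</p>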
      <p>In these steps, there is no common way for the tools to check the results of
importing the ontologies; we only have the results of combining the import and export
operations (the files exported by the tools), so we consider these two operations as an
atomic operation. Therefore, it must be noted that if a problem arises in one of these
steps, we cannot know whether the problem was caused when importing or when
exporting the ontology, since we do not know the state of the ontology inside each tool.</p>
      <p>After a benchmark execution, the results obtained from the ontology described in
the benchmark are three different states, namely, the original ontology, the intermediate
ontology exported by the first tool, and the final ontology exported by the second tool.
From these results, we define the evaluation criteria for a benchmark execution. These
evaluation criteria will be considered in Step 1, Step 2, and in the whole interchange
(Step 1 + Step 2), and are the following:
– Execution (OK/FAIL/C.E./N.E.) informs about the correct execution of a step or the
whole interchange. Its value is OK if the step or the whole interchange is carried out
with no execution problem; FAIL if the step or the whole interchange is carried out
with some execution problem; C.E. (Comparer Error) if the comparer launches an
exception when comparing the origin and final ontologies; and N.E. (Not Executed)
if the second step is not executed because the execution of the first step failed.
– Information added or lost informs about the information added to or lost from
the ontology in terms of triples in each step or in the whole interchange. We can
know the triples added or lost in Step 1, in Step 2, and in the whole interchange
by comparing the origin ontology with the intermediate ontology, the intermediate
ontology with the final one, and the original with the final ontology, respectively.
– Interchange (SAME/DIFFERENT/NO) informs whether the ontology has been
interchanged correctly with no addition or loss of information. From the previous
basic measurements, we can define Interchange as a derived measurement that is
SAME if Execution is OK and Information added and Information lost are void;
DIFFERENT if Execution is OK but Information added or Information lost are not
void; and NO if Execution is FAIL, N.E. or C.E.</p>
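      <p>The derived Interchange measurement can be computed mechanically from the Execution value and the triple counts reported by the comparer. A minimal sketch (the method and class names are illustrative, not IBSE's actual API):</p>

```java
public class InterchangeCriteria {

    // execution is "OK", "FAIL", "C.E." or "N.E.";
    // added/lost are the triple counts reported by the ontology comparer.
    static String deriveInterchange(String execution, int added, int lost) {
        if (!execution.equals("OK")) {
            return "NO";        // FAIL, N.E. or C.E.
        }
        if (added == 0 && lost == 0) {
            return "SAME";      // executed with no information added or lost
        }
        return "DIFFERENT";     // executed, but the ontology changed
    }
}
```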
      <p>Although this section provides a theoretical description of the experiment to
perform in the benchmarking, carrying this experiment out requires coping with some other
issues such as how to find an appropriate set of ontologies to use as input for the
experimentation, how to define the way of representing the benchmarks and the results
in OWL and, finally, how to develop the evaluation infrastructure. The next sections
describe the approach followed in each of these issues.</p>
    </sec>
    <sec id="sec-5">
      <title>5 Ontology Dataset</title>
      <p>
        As we mentioned above, any group of ontologies could be used as input for the
experiment. For example, ontologies synthetically generated such as the Lehigh University
Benchmark (LUBM) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] or the University Ontology Benchmark (UOB) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], a group of
real ontologies in a certain domain, or the OWL Test Cases4 developed by the W3C
Web Ontology Working Group could be employed.
      </p>
      <p>Nevertheless, since our goal is to improve interoperability, these ontologies could only
complement our experiments, as they were designed with specific goals and
requirements, such as performance or correctness evaluation. In our case, we
aim to evaluate interoperability with simple OWL ontologies that, although they do not
cover the OWL specification exhaustively, allow highlighting problems in the tools.</p>
      <p>
        Therefore, the ontologies used in the experiment are those defined for the OWL Lite
Import Benchmark Suite, described in detail in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. This benchmark suite is intended to
evaluate the OWL import capabilities of the ontology development tools by checking
the import of simple combinations of components of the OWL Lite knowledge model.
It is composed of 82 benchmarks and is available on the Web5.
4 http://www.w3.org/TR/owl-test/
      </p>
      <p>The ontologies composing the benchmark suite are serialized using the RDF/XML
syntax, as this is the syntax most widely employed by tools when exporting and
importing to/from OWL. Since the RDF/XML syntax allows serializing ontology components
in different ways while maintaining the same semantics, the benchmark suite includes
two groups of benchmarks: one to check the import of the different combinations of the
OWL Lite vocabulary terms, and another to check the import of OWL ontologies with
the different variants of the RDF/XML syntax.</p>
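      <p>As an illustration of such syntactic variants (this example is ours, not taken from the benchmark suite), the same class declaration can be serialized either as a typed node or as a plain resource with an explicit rdf:type triple, with identical semantics:</p>

```xml
<!-- Variant 1: typed node -->
<owl:Class rdf:about="http://example.org/onto#Animal"/>

<!-- Variant 2: untyped node with an explicit rdf:type triple -->
<rdf:Description rdf:about="http://example.org/onto#Animal">
  <rdf:type rdf:resource="http://www.w3.org/2002/07/owl#Class"/>
</rdf:Description>
```

      <p>A tool that imports only one of these forms correctly will pass the first group of benchmarks but fail part of the second.</p>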
    </sec>
    <sec id="sec-6">
      <title>6 Representation of Benchmarks and Results</title>
      <p>This section defines the two OWL ontologies used in the OWL Interoperability
Benchmarking. The benchmarkOntology6 ontology defines the vocabulary that represents the
benchmarks to be executed, while the resultOntology7 ontology defines the vocabulary
that represents the results of a benchmark execution. These ontologies are lightweight,
as their main goal is to be user-friendly.</p>
      <p>Next, the classes and properties that these ontologies contain are presented. All the
datatype properties have xsd:string as range, with the exception of timestamp, whose
range is xsd:dateTime.
benchmarkOntology. The Document class represents a document containing one
ontology. A document can be further described with the following properties having
Document as domain: documentURL (the URL of the document), ontologyName
(the ontology name), ontologyNamespace (the ontology namespace), and
representationLanguage (the language used to implement the ontology).</p>
      <p>The Benchmark class represents a benchmark to be executed. A benchmark can be
further described with the following properties having Benchmark as domain: id
(the benchmark identifier); usesDocument (the document that contains one
ontology used as input); interchangeLanguage (the interchange language used); author
(the benchmark author); and version (the benchmark version number).
resultOntology. The Tool class represents a tool that has participated as origin or
destination of an interchange in a benchmark. A tool can be further described with
the following properties having Tool as domain: toolName (the tool name), and
toolVersion (the tool version number).</p>
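      <p>A benchmark description using this vocabulary might look as follows. This is a hypothetical instance: the class and property names are those listed above, but the namespace URIs, resource URIs and values are illustrative:</p>

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bmo="http://example.org/benchmarkOntology#">
  <bmo:Document rdf:about="http://example.org/ontologies/ISA01.owl">
    <bmo:documentURL>http://example.org/ontologies/ISA01.owl</bmo:documentURL>
    <bmo:ontologyName>ISA01</bmo:ontologyName>
    <bmo:ontologyNamespace>http://example.org/ISA01#</bmo:ontologyNamespace>
    <bmo:representationLanguage>OWL</bmo:representationLanguage>
  </bmo:Document>
  <bmo:Benchmark rdf:about="http://example.org/benchmarks/ISA01">
    <bmo:id>ISA01</bmo:id>
    <bmo:usesDocument rdf:resource="http://example.org/ontologies/ISA01.owl"/>
    <bmo:interchangeLanguage>OWL</bmo:interchangeLanguage>
    <bmo:author>IBSE</bmo:author>
    <bmo:version>1.0</bmo:version>
  </bmo:Benchmark>
</rdf:RDF>
```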
      <p>The Result class represents a result of a benchmark execution. A result can be
further described with the following properties having Result as domain: ofBenchmark
(the benchmark to which the result corresponds); originTool (the tool origin of the
interchange); destinationTool (the tool destination of the interchange); execution,
executionStep1, executionStep2 (whether the whole interchange, the first and the second
steps are carried out without any execution problem, respectively); interchange,
interchangeStep1, interchangeStep2 (whether the ontology has been interchanged correctly
from the origin tool to the destination tool, in the first and second steps, with
no addition or loss of information, respectively); informationAdded,
informationAddedStep1, informationAddedStep2 (the triples added in the whole interchange,
the first and the second steps, respectively); informationRemoved,
informationRemovedStep1, informationRemovedStep2 (the triples removed in the whole
interchange, the first and the second steps, respectively); and finally, timestamp (the
date and time when the benchmark is executed).
5 http://knowledgeweb.semanticweb.org/benchmarking interoperability/owl/import.html
6 http://knowledgeweb.semanticweb.org/benchmarking interoperability/owl/benchmarkOntology.owl
7 http://knowledgeweb.semanticweb.org/benchmarking interoperability/owl/resultOntology.owl</p>
    </sec>
    <sec id="sec-7">
      <title>7 Implementation of the Evaluation Infrastructure</title>
      <p>IBSE (Interoperability Benchmark Suite Executor) is the interoperability evaluation
infrastructure that automates the execution of the experiments to be performed in the
OWL Interoperability Benchmarking and that provides HTML summarized views of
the obtained results. This is performed in the following three consecutive steps:
1. To generate machine-readable benchmark descriptions from a group of
ontologies. In this step, an RDF file is generated including one benchmark for each
ontology from a group of ontologies located at a URI, using the vocabulary of the
benchmarkOntology ontology. This step can be skipped if benchmark descriptions are
already available.
2. To execute the benchmarks. In this step, each benchmark described in the RDF
file is executed between each pair of tools, one tool being the origin of the
interchange and the other the destination of the interchange. The results are stored in an
RDF file using the vocabulary of the resultOntology ontology.</p>
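      <p>Step 1 can be sketched as follows. This is a minimal sketch, not IBSE's actual code: the bmo namespace, the class name and the output layout are illustrative, while the vocabulary terms (Benchmark, id, usesDocument, interchangeLanguage) follow Section 6:</p>

```java
import java.io.File;

public class BenchmarkGenerator {

    // Emits one Benchmark description per OWL file found in the given
    // directory, using the benchmarkOntology vocabulary from Section 6.
    static String describe(File dir) {
        StringBuilder rdf = new StringBuilder();
        rdf.append("<rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'\n");
        rdf.append("         xmlns:bmo='http://example.org/benchmarkOntology#'>\n");
        File[] files = dir.listFiles();
        if (files != null) {
            for (File f : files) {
                if (f.getName().endsWith(".owl")) {
                    // Use the file name (without extension) as benchmark id.
                    String id = f.getName().substring(0, f.getName().length() - 4);
                    rdf.append("  <bmo:Benchmark>\n");
                    rdf.append("    <bmo:id>").append(id).append("</bmo:id>\n");
                    rdf.append("    <bmo:usesDocument rdf:resource='")
                       .append(f.toURI()).append("'/>\n");
                    rdf.append("    <bmo:interchangeLanguage>OWL</bmo:interchangeLanguage>\n");
                    rdf.append("  </bmo:Benchmark>\n");
                }
            }
        }
        return rdf.append("</rdf:RDF>\n").toString();
    }
}
```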
      <p>Once we have the original, intermediate and final files with the corresponding
ontologies, we extract the execution results by comparing each of these ontologies, as
shown in Section 4. This comparison and its output depend on an external ontology
comparer. The current implementation uses the diff methods of an RDF(S) comparer
(rdf-utils8 version 0.3b) and of an OWL comparer (KAON2 OWL Tools9 version
0.27). Nevertheless, the implementation permits inserting other comparers.
3. To generate HTML files with different visualizations of the results. In this step,
three HTML files are generated. One of them shows the original, intermediate and
final ontologies obtained in a benchmark execution, whereas the other two
summarize the Execution, Interchange, Information added, and Information lost results
contained in the RDF result files. These two HTML files show, for each benchmark,
the results of the final interchange and of the intermediate steps (Step 1 and Step
2), as Figure 2 illustrates. The only difference between these two visualizations is
that one depicts the count of triples inserted or removed to provide a quick
summary of the results, and the other, all the triples that have changed. We also distinguish
between changes in the ontology formal model and in its documentation (its
annotation properties). Regarding the triples inserted or removed, the tables show clearly
the annotation properties, both in the triple count, by counting them separately, and
in the triple list, by showing them in a different colour.
[Figure 2. Sample result: Step 1 (Jena): Interchange=SAME, Execution=OK; Step 2 (WebODE): Interchange=DIFFERENT, Execution=OK, Inserted: Annotations: 1, Others: 0.]
8 http://wymiwyg.org/rdf-utils/
9 http://owltools.ontoware.org/</p>
      <p>The only requirements for executing the evaluation infrastructure are to have a Java
Runtime Environment and the tools that participate in the experiment installed, with
their corresponding implementations in the IBSE evaluation infrastructure. The
benchmarking web page contains links to the source code and binaries of IBSE10 and links to
the RDF file with the description of the benchmarks to be executed in each tool11.</p>
      <p>The only operation that a tool has to perform to participate in the experiment, as seen
in Section 4, is to import an ontology from a file and to export the imported ontology
into another file. To insert a new tool into the evaluation infrastructure, only one method
from the ToolManager interface has to be implemented: void ImportExport(String
importFile, String exportFile, String ontologyName, String namespace, String language).
This method receives as input parameters the location of the file with the ontology to
be imported, the location of the file where the exported ontology has to be written, the
name of the ontology, the namespace of the ontology, and the representation language
of the ontology, respectively. This method has already been implemented for the Jena12
ontology repository and for the WebODE13 ontology development tool.</p>
    </sec>
    <sec id="sec-8">
      <title>8 Execution Sample</title>
      <p>We have performed a sample experiment using Jena and WebODE. After running the
experiment, we obtained four sets of results from all the possible ways of interchanging
ontologies between these tools: Jena with itself, Jena with WebODE, WebODE with
Jena, and WebODE with itself.
10 http://knowledgeweb.semanticweb.org/benchmarking interoperability/ibse/
11 http://knowledgeweb.semanticweb.org/benchmarking interoperability/owl/OIBS.rdf
12 http://jena.sourceforge.net/
13 http://webode.dia.fi.upm.es/</p>
      <p>The HTML files with the interoperability results of Jena and WebODE provide not
just data about the interoperability of the tools but also some hints on improving the
evaluation infrastructure. Table 1 presents a summary of this first set of results, including a
richer classification of these results after some in-depth analyses.</p>
      <p>We are currently organizing the benchmarking of interoperability of ontology tools
using OWL as interchange language. To support the automatic execution of the
experiment and the analysis of the results we have developed IBSE, the evaluation
infrastructure presented in this paper, which can be used either by the benchmarking participants
or by anyone interested in evaluating the interoperability of their tools.</p>
      <p>At the time of writing this paper, we do not have a definitive list of the tools
participating in the benchmarking, and the evaluation of the tools has not
started. Therefore, we do not have conclusive results from any tool. The evaluation
infrastructure is, nevertheless, currently under development, and these first results provide
valuable feedback for continuing the work.</p>
      <p>One change that should be implemented is the detection of comparer errors,
although this can be quite comparer-specific, and the inclusion of these errors in the
results. To facilitate the analysis, we also need to enhance the result visualization by
providing graphical visualizations and statistics of the whole results.</p>
      <p>In the case of tools whose internal knowledge model does not correspond to the
interchange language, the analysis of the results is not straightforward; sometimes
triples are inserted or removed as intended by the tools' developers, but this correct
functioning is difficult to evaluate or to distinguish in the current results.</p>
      <p>Another issue that is not clear enough with the current results is which components
or combinations of components are interchanged correctly and which are not. This issue
could be solved by extending the description of the ontologies with this data.</p>
      <p>The evaluation infrastructure presented in this paper also considers the current
evolution of OWL. It allows evaluating OWL 1.1 implementations either by using any set of
OWL 1.1 ontologies to evaluate their interoperability or by using the current ontology
dataset to evaluate their backward compatibility with the OWL recommendation.</p>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgments</title>
      <p>This work is partially supported by a FPI grant from the Spanish Ministry of
Education (BES-2005-8024), by the IST project Knowledge Web (IST-2004-507482) and
by the CICYT project Infraestructura tecnológica de servicios semánticos para la web
semántica (TIN2004-02660). Thanks to Rosario Plaza for reviewing the grammar of
this paper.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. IEEE: IEEE Standard Glossary of Software Engineering Terminology. ANSI/IEEE Std 610.12-1990. IEEE (1991)</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Sure, Y., Corcho, O., eds.: Proceedings of the 2nd International Workshop on Evaluation of Ontology-based Tools (EON2003). Volume 87 of CEUR-WS, Florida, USA (2003)</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. García-Castro, R., Sure, Y., Zondler, M., Corby, O., Prieto-González, J., Bontas, E.P., Nixon, L., Mochol, M.: D1.2.2.1.1 Benchmarking the interoperability of ontology development tools using RDF(S) as interchange language. Technical report, Knowledge Web (2006)</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Garc</surname>
          </string-name>
          <article-title>´a-</article-title>
          <string-name>
            <surname>Castro</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maynard</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wache</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Foxvog</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
<string-name>
  <surname>González-Cabero</surname>
  ,
  <given-names>R.</given-names>
</string-name>
:
<article-title>D2.1.4 Specification of a methodology, general criteria and benchmark suites for benchmarking ontology tools</article-title>
.
          <source>Technical report, Knowledge Web</source>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
<string-name>
  <surname>García-Castro</surname>
  ,
  <given-names>R.</given-names>
</string-name>
,
<string-name>
  <surname>Gómez-Pérez</surname>
  ,
  <given-names>A.</given-names>
</string-name>
:
<article-title>Guidelines for Benchmarking the Performance of Ontology Management APIs</article-title>
          .
          <source>In: Proceedings of the 4th International Semantic Web Conference (ISWC2005)</source>
          . Number 3729 in LNCS, Galway, Ireland, Springer-Verlag (
          <year>2005</year>
          )
<fpage>277</fpage>
–
<lpage>292</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Guo</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
,
<string-name>
  <surname>Heflin</surname>
  ,
  <given-names>J.</given-names>
</string-name>
:
          <article-title>LUBM: A Benchmark for OWL Knowledge Base Systems</article-title>
          .
          <source>Journal of Web Semantics</source>
          <volume>3</volume>
          (
          <issue>2</issue>
          ) (
          <year>2005</year>
          )
<fpage>158</fpage>
–
<lpage>182</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Qiu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Towards a complete OWL ontology benchmark</article-title>
. In Sure, Y., Domingue, J., eds.:
<source>Proceedings of the 3rd European Semantic Web Conference (ESWC2006)</source>
. Volume 4011 of LNCS, Budva, Montenegro (
<year>2006</year>
)
<fpage>125</fpage>
–
<lpage>139</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>David</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
<string-name>
  <surname>García-Castro</surname>
  ,
  <given-names>R.</given-names>
</string-name>
,
<string-name>
  <surname>Gómez-Pérez</surname>
  ,
  <given-names>A.</given-names>
</string-name>
:
<article-title>Defining a benchmark suite for evaluating the import of OWL Lite ontologies</article-title>
          .
          <source>In: Proceedings of the OWL: Experiences and Directions 2006 workshop (OWL2006)</source>
          , Athens, Georgia, USA (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>