<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>How well does your Instance Matching system perform? Experimental evaluation with LANCE</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tzanina Saveta</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Evangelia Daskalaki</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giorgos Flouris</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Irini Fundulaki</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Axel-Cyrille Ngonga Ngomo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>IFI/AKSW, University of Leipzig</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute of Computer Science-FORTH</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Identifying duplicate instances in the Data Web is most commonly performed (semi-)automatically using instance matching frameworks. However, current instance matching benchmarks fail to provide end users and developers with the necessary insights pertaining to how current frameworks behave when dealing with real data. In this paper, we present the results of the evaluation of instance matching systems using Lance, a domain-independent, schema-agnostic instance matching benchmark generator for Linked Data. Lance is the first benchmark generator for Linked Data to support semantics-aware test cases that take into account complex OWL constructs, in addition to the standard test cases related to structure and value transformations. We provide a comparative analysis with benchmarks produced using the Lance framework for different domains to assess and identify the capabilities of state-of-the-art instance matching systems.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Instance matching (IM) refers to the problem of identifying instances that describe the same real-world object. With the increasing adoption of Semantic Web technologies and the publication of large interrelated RDF datasets and ontologies that form the Linked Data (LD) Cloud, a number of IM techniques adapted to this setting have been proposed [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1,2,3</xref>
        ].
      </p>
      <p>
        Clearly, the large variety of IM techniques requires their comparative
evaluation to determine which technique is best suited for a given application.
Assessing the performance of these systems generally requires well-de ned and widely
accepted benchmarks to allow determining the weak and strong points of the
methods or systems, as well as for motivating the development of better systems
to overcome the identi ed weak points. Hence, properly designed benchmarks
help push the limit of existing systems [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7 ref8">4,5,6,7,8</xref>
        ], advancing both research and
technology.
      </p>
      <p>
        Recently Lance [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], a state-of-the-art benchmark generator for benchmarking instance matching systems in the LD context was introduced. Lance is a flexible, generic, domain-independent and schema-agnostic benchmark generator for IM systems. It supports a large variety of value-based, structure-based and semantics-aware transformations with varying degrees of difficulty. The results of these transformations are recorded in the form of a weighted gold standard that allows a more fine-grained analysis of the performance of instance matching tools. (The presented work was funded by the H2020 project HOBBIT, #688227.) Details on the different transformation types, our weighted gold standard and metrics, as well as the evaluation of our system, can be found in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        In the current paper, our focus lies on evaluating state-of-the-art instance matching systems with benchmarks produced using the Lance framework. The purpose of this evaluation is to provide further insights on the weak and strong points of different IM systems, complementary to the ones already established in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In particular, we evaluate the effect of using different datasets as input to the benchmark generator module of Lance, and show that the performance of IM systems is affected not only by the benchmark creation process itself, but also by the characteristics of the input dataset that was used to generate the benchmark. For our tests, we used SPIMBENCH [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and UOBM [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
datasets.
      </p>
    </sec>
    <sec id="sec-2">
      <title>LANCE Approach</title>
      <p>
        Here, we give the basic features of Lance. The interested reader can find more details in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]:
Transformation-based test cases. Lance supports a set of test cases based on transformations that distinguish different types of matching entities. Similarly to existing IM benchmarks, Lance supports value-based (typos, date/number formats, etc.) and structure-based (deletion of classes/properties, aggregations, splits, etc.) test cases. Lance is the first benchmark generator to support semantics-aware test cases that go beyond the standard RDFS constructs and allow testing the ability of IM systems to use the semantics of RDFS/OWL axioms to identify matches; these include tests involving instance (in)equality, class and property equivalence and disjointness, property constraints, as well as complex class definitions. Lance also supports simple combination (SC) test cases (implemented using the aforementioned transformations applied on different triples pertaining to the same instance), as well as complex combination (CC) test cases (implemented by combinations of individual transformations on the same triple).
Similarity score and fine-grained evaluation metrics. Lance provides an enriched, weighted gold standard and related evaluation metrics, which allow a more fine-grained analysis of the performance of systems for tests of varying difficulty. The gold standard indicates the matches between source and target instances. In particular, each match in the gold standard is enriched with annotations specific to the test case that generated each pair, i.e., the type of test case it represents, the property on which a transformation was applied, and a similarity score (or weight) of the pair of reported matched instances that essentially quantifies the difficulty of finding a particular match. This detailed information allows Lance to provide more detailed views and novel evaluation metrics to assess the completeness, soundness, and overall matching quality of an IM system on top of the standard precision/recall metrics. Thus, Lance provides fine-grained information to support debugging and extending IM systems.
      </p>
      <p>[Figure 1: The LANCE architecture, showing the RDF repository that stores the source dataset and the test case generator with its Initialization, Resource Generator, Resource Transformation and Weight Computation modules, producing the matched instances.]</p>
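      <p>As an illustration of the value-based test cases and the weighted gold standard described above, the following sketch pairs a typo transformation with an annotated gold-standard entry. This is illustrative Python under assumed names (inject_typo, gold_entry, the example URIs), not the Lance implementation.</p>

```python
import random

def inject_typo(value: str, severity: float, rng: random.Random) -> str:
    """Swap adjacent characters; `severity` controls how many swaps."""
    chars = list(value)
    if len(chars) > 1:
        n_swaps = max(1, int(severity * len(chars) / 4))
        for _ in range(n_swaps):
            i = rng.randrange(len(chars) - 1)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def gold_entry(source_uri, target_uri, prop, severity):
    """A gold-standard match annotated with the test-case type, the
    transformed property, and a similarity weight: the harder the
    transformation, the lower the similarity score."""
    return {
        "source": source_uri,
        "target": target_uri,
        "test_case": "value/typo",
        "property": prop,
        "similarity": round(1.0 - severity, 2),
    }

rng = random.Random(42)
corrupted = inject_typo("Alan Turing", severity=0.5, rng=rng)
entry = gold_entry("ex:p1", "ex:p1_t", "foaf:name", severity=0.5)
```

      <p>Here a higher severity yields more character swaps and a lower similarity weight, mirroring the idea that harder transformations make a match more difficult to find.</p>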
      <p>High level of customization. Lance provides the ability to build benchmarks with different characteristics on top of any input dataset, thereby allowing the implementation of diverse test cases for different domains, dataset sizes and morphology. This makes Lance highly customizable and domain-independent.
Implementation of LANCE. Lance is a highly configurable instance matching benchmark generator for Linked Data that consists of two components: (i) an RDF repository that stores the source datasets and (ii) a test case generator (see Figure 1). The test case generator takes as input a source dataset and produces a target dataset that implements various test cases according to the specified configuration parameters, to be used for testing instance matching tools. It consists of the Initialization, the Resource Generator and the Resource Transformation modules.</p>
      <p>- The Initialization module reads the test case generation parameters and retrieves, by means of SPARQL queries, the schema information (e.g., schema classes and properties) from the RDF repository that will be used for producing the target dataset.
- The Resource Generator uses this input to retrieve instances of those schema constructs from the RDF repository and passes those (along with the configuration parameters) to the Resource Transformation module.
- The Resource Transformation module returns for a source instance u_i the transformed instance u'_i and stores it in the target dataset; this module is also responsible for producing an entry in the gold standard. Once Lance has performed all the requested transformations, the Weight Computation module calculates the similarity scores of the produced matches. The configuration parameters specify the part of the schema and data to consider when producing the different test cases, as well as the percentage and type of transformations to consider. More specifically, parameters for value-based test cases specify the kind and severity of transformations to be applied; for structure- and semantics-aware test cases, the parameters specify the type of transformations to be considered. The idea behind configuration parameters is to allow one to tune the benchmark generator into producing benchmarks of varying degrees of difficulty which test different aspects of an instance matching tool. The code of Lance is available at https://github.com/jsaveta/Lance.</p>
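      <p>The interplay of the three modules can be sketched as a small end-to-end pipeline. All names and data structures below are assumptions for illustration only; Lance itself retrieves schema information and instances via SPARQL from the RDF repository rather than from an in-memory list.</p>

```python
# A toy sketch of the generator pipeline described above. The in-memory
# "repository" and every function name here are illustrative assumptions,
# not Lance's actual API.

def initialization(repo, config):
    """Read generation parameters and collect the schema classes to cover
    (a stand-in for the SPARQL queries over the RDF repository)."""
    return {"classes": {cls for (_, cls, _) in repo}, "config": config}

def resource_generator(repo, init):
    """Yield instances of the selected schema constructs."""
    for uri, cls, props in repo:
        if cls in init["classes"]:
            yield uri, props

def resource_transformation(uri, props, config, gold):
    """Produce the transformed target instance and record a
    gold-standard entry for the (source, target) match."""
    transformed = {p: v.upper() for p, v in props.items()}  # toy value transform
    target_uri = uri + "_t"
    gold.append((uri, target_uri, config["severity"]))
    return target_uri, transformed

# Source "repository": (instance URI, class, property map) records.
repo = [
    ("ex:p1", "ex:Person", {"foaf:name": "alice"}),
    ("ex:d1", "ex:Document", {"dc:title": "news"}),
]
gold, target = [], {}
init = initialization(repo, {"severity": 0.3})
for uri, props in resource_generator(repo, init):
    t_uri, t_props = resource_transformation(uri, props, init["config"], gold)
    target[t_uri] = t_props
```

      <p>The flow mirrors the text: initialization selects the schema constructs, the generator enumerates their instances, and the transformation step writes both the target dataset and one gold-standard entry per transformed instance.</p>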
      <p>Lance is implemented in Java, and in the current version we use OWLIM Version 2.7.3 as our RDF repository.</p>
    </sec>
    <sec id="sec-3">
      <title>Experimental Results</title>
      <p>
        Settings. Our evaluation focused on demonstrating the capability of our
benchmark generator in assessing and identifying the strengths and weaknesses of
instance matching systems. For this purpose, we evaluated LogMap Version 2.4 [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]
using the MORe [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] reasoner, OtO [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and LIMES [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] running the EAGLE [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]
algorithm. We chose these tools because they are prototypical working instances
of existing IM systems. Attempts to evaluate systems such as RiMOM-IM [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ],
COMA++ [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and CODI [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] with Lance were not successful due to issues from
the systems' side. We were not able to work with RiMOM-IM due to incomplete
information regarding the use of the system; COMA++ supports instance-based
ontology matching but does not aim for instance matching per se. CODI is no
longer supported by its development team. LogMap considers both schema- and instance-level matching; OtO, on the other hand, needs to be configured manually to implement instance matching tasks. The same holds for EAGLE, which can learn specifications and focuses on instance matching tasks only. In order to identify strong and weak points of state-of-the-art IM systems, we tested the tools at hand with difficult tasks in which we transform the entirety of the source dataset to produce the target dataset. All experiments were conducted on an Intel(R) Core(TM) 2 Duo CPU E8400 @3.00GHz with 8GB of main memory running Windows 7 (64-bit).
      </p>
      <p>
        Datasets. We used as source the datasets produced by LDBC's SPIMBENCH [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]
and UOBM's [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] data generators. SPIMBENCH datasets are described using a rich ontology with many different OWL constructs, in contrast with UOBM, which employs a simpler ontology with many object and some datatype properties. For each generator (SPIMBENCH, UOBM) we produced two datasets, one with 10K triples and one with 50K triples. For SPIMBENCH these triples correspond approximately to 500 and 2.5K instances respectively, and for UOBM to 2K and 10K.
      </p>
      <p>Results. Figures 2 and 3 report the results for the different types of test cases and for the aforementioned datasets. In all cases, we measured recall, precision and f-measure, along with the similarity score and standard deviation.</p>
      <p>Regarding the SPIMBENCH dataset, LogMap responds well to the value-based test cases, having high precision and recall (close to 0.75), but its performance degrades when the instances are involved in semantics-aware test cases, giving low precision and recall (below 0.4). Despite these results, we claim that LogMap performs sufficiently well when faced with semantics-aware transformations, since it is called to perform a matching task on highly heterogeneous datasets. OtO gives very good precision results for the value-based test cases, but in some cases it is not able to find any match (recall is below 0.1). The LDBC Semantic Publishing Benchmark is available at http://ldbcouncil.org/developer/spb.</p>
      <p>[Figures 2 and 3: precision, recall and f-measure per test case for the evaluated systems (e.g., LogMap and EAGLE on the 10K datasets).]</p>
      <p>The algorithm of EAGLE performs well when faced with syntactic
transformations. Increasing changes to the topology of the underlying RDF graphs
(the case of semantics-aware test cases) leads to a degradation of the
performance of the algorithm. The performance of EAGLE is not consistent since it is
non-deterministic and uses unsupervised learning.</p>
      <p>The second experiment that we conducted compared the similarity scores as
well as the standard deviation of the results of the systems with those of Lance
when the latter is used as a baseline. These metrics provide insights on the
ability of the systems to address the challenges proposed by Lance benchmarks.
Figures 4 and 6 give the standard deviation and similarity scores for all three
systems and for the semantics-aware test case for the 10K triples SPIMBENCH
dataset. They also show the values for Lance in order to have a baseline for
comparison. We can see that LogMap reports scores and standard deviation close to the ones given by Lance, verifying that it can address the "difficult" test cases. EAGLE and OtO report lower similarity scores and standard deviation, meaning that they cannot address the challenges imposed by the harder, semantics-aware test cases.</p>
      <p>For UOBM, we ran LogMap and OtO, but not EAGLE, because we were not able to correctly initialize it. UOBM datasets seem to be more "difficult" for both IM systems, and this difficulty stems from the dataset itself rather than from the transformations imposed by Lance. In particular, an important source of difficulty for the systems derives from the fact that the URIs of the instances in the dataset look very similar to each other, so even the change of a URI can lead to false positives or false negatives. To conclude, LogMap does not respond well to any of the categories, but its performance is not affected by the dataset size. On the other hand, OtO responds better, but is affected negatively when the dataset gets larger. Figures 5 and 7 give the standard deviation and similarity scores for both systems and for the structure-based test case for the 10K triples UOBM dataset. We can also see that OtO reports scores and standard deviation results slightly closer to the ones given by Lance than LogMap, verifying that it can address more difficult test cases.</p>
      <p>
        In our previous evaluation [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], we showed how the different types of transformations affected the performance of IM systems when tested using Lance. The current work shows that the dataset used as a source for generating the benchmark is also an important factor that may affect such performance. This conclusion derives from the fact that, even though we used the same parameters for the transformations in SPIMBENCH and UOBM and for all sizes, the systems did not respond similarly for the two datasets and dataset sizes, as one might expect. This phenomenon was explained in Section 3, but note that our conclusions do not necessarily generalize to other datasets. This is a novel insight that we plan to exploit further by determining the dataset characteristics that are critical for this effect (e.g., dataset structure, URI format and scheme, dataset domain, etc.).
      </p>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>In our previous work we introduced Lance, an instance matching benchmark generator focusing on benchmarking instance matching systems for Linked Data. Lance is a domain-independent, highly modular and configurable generator that can accept as input any linked dataset and its accompanying schema to produce a target dataset implementing matching tasks of varying levels of difficulty.</p>
      <p>
        In the current paper, we ran experiments which used benchmarks generated
by Lance to evaluate state-of-the-art IM systems. These experiments should be
viewed as an addendum to the experiments appearing in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], and have provided additional insights on the factors that affect the performance of an IM system. In fact, it was shown that it is not only the types (and difficulty) of the transformations imposed by Lance that affect a system's performance; the characteristics of the source dataset may also play an important role.
      </p>
      <p>In the future, we plan to study this observation further by pinpointing those characteristics of a dataset that have the most important effect on the systems' performance. Regarding Lance itself, we will consider extensions for spatial and streaming data; we also intend to work with datasets that include blank nodes, thereby creating more challenging tasks for instance matching tools. Furthermore, we plan to study the frequency of appearance of the various types of transformations in real datasets in order to be able to propose mixes of different transformations that are more realistic with respect to actual datasets.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. R. Isele, A. Jentzsch, and C. Bizer. Silk Server - Adding missing Links while consuming Linked Data. In COLD, 2010.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. A.-C. Ngonga Ngomo and S. Auer. LIMES - A Time-Efficient Approach for Large-Scale Link Discovery on the Web of Data. In IJCAI, 2011.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. K. Stefanidis, V. Efthymiou, M. Herschel, and V. Christophides. Entity resolution in the web of data. In WWW (Companion Volume), 2014.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. Ontology Alignment Evaluation Initiative. http://oaei.ontologymatching.org/.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. K. Zaiss, S. Conrad, and S. Vater. A Benchmark for Testing Instance-Based Ontology Matching Methods. In KMIS, 2010.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. B. Alexe, W.-C. Tan, and Y. Velegrakis. STBenchmark: Towards a benchmark for mapping systems. In PVLDB, 2008.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. T. Saveta, E. Daskalaki, G. Flouris, et al. Pushing the Limits of Instance Matching Systems: A Semantics-Aware Benchmark for Linked Data. In WWW (Companion Volume), 2015.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. T. Saveta, E. Daskalaki, G. Flouris, I. Fundulaki, and A.-C. Ngonga Ngomo. LANCE: Piercing to the Heart of Instance Matching Tools. In ISWC, 2015.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. L. Ma, Y. Yang, Z. Qiu, et al. Towards a Complete OWL Ontology Benchmark. In ESWC, 2006.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. E. Jimenez-Ruiz and B. C. Grau. LogMap: Logic-based and scalable ontology matching. In ISWC, 2011.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. A. A. Romero, B. C. Grau, I. Horrocks, and E. Jimenez-Ruiz. MORe: a Modular OWL Reasoner for Ontology Classification. In ORE, 2013.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>12. E. Daskalaki and D. Plexousakis. OtO Matching System: A Multi-strategy Approach to Instance Matching. In CAiSE, 2012.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>13. A.-C. Ngonga Ngomo and K. Lyko. EAGLE: Efficient Active Learning of Link Specifications using Genetic Programming. In ESWC, 2012.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>14. J. Li, J. Tang, Y. Li, and Q. Luo. RiMOM: A dynamic multistrategy ontology alignment framework. TKDE, 21(8), 2009.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>15. S. Massmann, S. Raunich, D. Aumueller, P. Arnold, and E. Rahm. Evolution of the COMA match system. In Ontology Matching, 2011.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>16. J. Euzenat, A. Ferrara, W. R. van Hage, et al. Results of the ontology alignment evaluation initiative 2011. In OM, 2011.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>