=Paper= {{Paper |id=None |storemode=property |title=One Query to Bind Them All |pdfUrl=https://ceur-ws.org/Vol-782/HerzigAndTran_COLD2011.pdf |volume=Vol-782 |dblpUrl=https://dblp.org/rec/conf/semweb/HerzigT11 }} ==One Query to Bind Them All== https://ceur-ws.org/Vol-782/HerzigAndTran_COLD2011.pdf
                One Query to Bind Them All

                          Daniel M. Herzig, Thanh Tran

                                   Institute AIFB
                          Karlsruhe Institute of Technology
                             76128 Karlsruhe, Germany
                           {herzig, duc.tran}@kit.edu



      Abstract. Recently, SPARQL became the standard language for query-
      ing RDF data on the Web. Like other formal query languages, it applies a
      Boolean-match semantics, i.e. results adhere strictly to the query. Thus,
      queries formulated for one dataset cannot easily be reused for querying
      other datasets. If another target dataset is to be queried, the queries need
      to be rewritten using the vocabulary of the target dataset, while preserv-
      ing the captured information need. This is a tedious manual task, which
      requires knowledge of the target vocabulary and often relies on com-
      putationally expensive techniques, such as mapping, data consolidation,
      or reasoning methods. Given the scale as well as the dynamics of Web
      datasets, even automatic rewriting is often infeasible. In this paper, we
      elaborate on a novel approach that allows reusing existing SPARQL
      queries adhering to one dataset to search for entities in other datasets,
      which are neither linked nor otherwise integrated beforehand. We use the
      results returned by the given seed query to construct an entity relevance
      model (ERM), which captures the content and the structure of relevant
      results. Candidate entities from the target dataset are obtained using
      existing keyword search techniques and subsequently ranked according
      to their similarity to the ERM. During this ranking task, we compute
      mappings between the structure of the ERM and of the candidates on-
      the-fly. The effectiveness of this approach is shown in experiments using
      large-scale datasets and compared to a keyword search baseline.


1   Introduction

Publishing structured data on the Web according to the Linked Data princi-
ples has been advocated by the Linked Data movement and the Semantic Web
community, and recently it has also received increasing support from companies
such as Yahoo!, Google, and Facebook, as well as from governmental institutions.
The amount of Linked Data on the Web is increasing rapidly and is now in the
order of several billion RDF triples. In order to query these structured data
effectively, SPARQL, an expressive formal query language, has been developed,
which became the standard recommended by the W3C in 2008 [1]. How-
ever, like other formal query languages, e.g. SQL for relational databases, this
query language applies a Boolean-match semantics, which means that results to
a query are crisp and strictly adhere to the constraints specified in the query.
As a consequence, these queries are bound to specific datasets, or rather to their
corresponding vocabularies. Whereas the database community relies mainly on
query rewriting [2], the answer of the Linked Data community to this problem
is (1) interlinking datasets manually and with the help of mapping and consoli-
dation techniques [3], (2) promoting the reuse of vocabularies, and (3) exploiting
the semantics of RDF for reasoning [4]. All these data integration approaches
require high upfront investments, given the scale as well as the heterogeneity of
Web datasets, before the actual querying across datasets is possible. Moreover,
most of these integration steps need to be performed again whenever the underlying
data changes, which is frequently the case in the dynamic data Web scenario.
Another way to bridge this schema and data mismatch is to apply information
retrieval techniques, in particular keyword search, which bridges the gap between two
datasets by simply ignoring the structure and applying the ‘bag of words’ notion
not just to webpages, but also to data retrieval [5, 6].
    In this paper, we elaborate on a novel approach that allows searching for
entities in a target dataset immediately, without any prior integration, using a
star-shaped SPARQL query adhering to the vocabulary of another dataset. This
approach combines the flexibility of unstructured information retrieval solutions
with the nature of database-style querying by incorporating the structure of the
underlying data. The idea is to start with a structured seed query specified for
one particular data source. Based on the content as well as the structure of the
results obtained from this data source, we construct an Entity Relevance Model
(ERM) that can be seen as an abstract representation of relevant results. Instead
of relying on mappings for rewriting the structured seed query, we simply treat
the seed query as a keyword query and submit it against other data sources to
obtain additional results. Then, we create mappings between the structure of
candidate results and the structure of the ERM at runtime, which are used
for the subsequent matching and ranking. Candidates matching the content and
structure captured by the ERM more closely are ranked higher.
This way, we can use unstructured keywords for querying, but at the same time
incorporate structural information into the matching and ranking that are part of
the search process.
    We investigate the effectiveness of our approach in experiments using DBpe-
dia as the source dataset and an RDF representation of IMdb as the target dataset.
Our approach is not limited to these datasets, but is general enough to be ap-
plied to any datasets. However, the scenario involving DBpedia also illustrates
an important use case, because DBpedia, as the nucleus of the Web of data, is
probably the best-known dataset, and many users already have SPARQL queries
for DBpedia or know how to create them. In addition, many applications
are built on top of DBpedia, which will also benefit if the underlying retrieval
incorporates entities from other datasets on-the-fly.
   Contributions. The contributions of this work can be summarized as fol-
lows: (1) We propose a novel approach for searching Web data sources that
does not rely on upfront data integration, but uses an Entity Relevance Model
(ERM) for searching as well as for computing mappings on-the-fly in line with
the pay-as-you-go data integration [7] paradigm. (2) We provide one specific
implementation of this approach, show how an ERM can be constructed and
applied to rank search results exploiting mappings created during runtime. (3)
In experiments, we investigate the feasibility and effectiveness of our approach
and compare it to a common keyword search baseline.
    Outline. Section 2 defines the research problem and briefly sketches our new
approach. This approach of relevance-based on-the-fly mappings is presented in
detail in Section 3. Evaluation results are presented in Section 4. Section 5
discusses related work before we conclude in Section 6.


2     Overview
In this section we define the setting of the addressed problem, present existing
solutions, and briefly sketch the framework we propose to deal with this problem.

2.1   Data on the Web
The problem we address is situated in a Web data scenario. Therefore, the kind
of Web data that is of most interest is RDF data. For reasons of simplicity and
generality, we employ a generic graph-based data model that omits specific RDF
features such as blank nodes. In this model, entity nodes are RDF resources,
literal nodes correspond to RDF literals, attributes are RDF properties, and
edges stand for RDF triples.
    Data Graph. The data graph is a directed and labeled graph G = (N, E).
The set of nodes N is a disjoint union of entities N_E and literals N_L, i.e. N =
N_E ⊎ N_L. Edges E can be conceived as a disjoint union E = E_E ⊎ E_L of edges
representing connections between entities, i.e. a(e1, e2) ∈ E_E iff e1, e2 ∈ N_E,
and connections between entities and literals, i.e. a(e1, v) ∈ E_L iff e1 ∈ N_E and
v ∈ N_L. Given this graph structure, we define the set of edges including the
targeted nodes, A(e) = {a(e, ·) ∈ E}, as the description of the entity e ∈ N_E, and
each member a(e, ·) ∈ A(e) is called an attribute of e. Thus, we in fact neglect
the distinction between E_E and E_L and simply refer to all edges as attributes.
The set of distinct attribute labels of an entity e, i.e. A′(e) = {a | a ∈ A(e)}, is
called the model of e.
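The data model above can be made concrete with a minimal sketch. The class and method names below are illustrative choices, not the paper's implementation; edges are stored as (attribute, entity, target) triples, and `description` and `model` realize A(e) and A′(e).

```python
class DataGraph:
    """Minimal sketch of the generic graph data model (illustrative names)."""

    def __init__(self):
        self.edges = []  # list of (attribute_label, source_entity, target)

    def add(self, attribute, entity, target):
        self.edges.append((attribute, entity, target))

    def description(self, e):
        """A(e): all outgoing edges of entity e."""
        return [(a, tgt) for (a, src, tgt) in self.edges if src == e]

    def model(self, e):
        """A'(e): the set of distinct attribute labels of e."""
        return {a for (a, _tgt) in self.description(e)}

g = DataGraph()
g.add("director", "db:Veronika_Voss", "db:Rainer_Werner_Fassbinder")
g.add("released", "db:Veronika_Voss", "1982")  # literal-valued edge
g.add("type", "db:Veronika_Voss", "db:Film")   # entity-valued edge
```

Note that, as in the text, `model` deliberately ignores whether an edge points to an entity or a literal: both are treated uniformly as attributes.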

2.2   Research Problem
Given this model of Web data, structured queries can be specified to search over
such datasets. The most commonly used language for querying RDF data on
the Web is SPARQL [1]. One essential feature of SPARQL is the Basic Graph
Pattern (BGP), which conceptually resembles the notion of conjunctive queries
that constitute an important fragment of other widely used structured query
languages such as SQL. Basically, a BGP is a set of conjunctive triple patterns,
each of the form predicate(subject, object). They represent patterns because ei-
ther predicate, subject or object might be a variable, or is specified explicitly
(as a constant). Answering these queries amounts to the task of graph pat-
tern matching, where subgraphs in the data graph matching the query pattern
are returned as results. Predicates are matched against attributes in the data,
whereas bindings to subjects and objects in the query are entities and literal
values, respectively.
    One particular form of BGP with high importance is the so-called entity query.
Essentially, entity queries are star-shaped queries with the node in the center of the star
representing the entity (or entities) to be retrieved. Figure 3 provides several ex-
amples. According to a recent Web query log study performed by Yahoo! re-
searchers, this type of query is the most common on the Web [8], and an
analysis of SPARQL logs showed that it is also the most common type used in
the Semantic Web context. Further, most of the current Semantic Web search
engines, such as Sig.ma 1 and Falcons [5], focus on answering these queries. For
the sake of clarity, we also focus on this type of query in this paper to illustrate
the main ideas underlying our approach.
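Answering a star-shaped entity query amounts to intersecting, per triple pattern, the entities that satisfy that pattern. The following sketch illustrates this with a toy triple set; the data and function name are hypothetical examples, not the paper's code, and only patterns constraining the centre variable `?x` are considered.

```python
# Toy data graph as (predicate, subject, object) triples.
data = {
    ("director", "db:Veronika_Voss", "db:Rainer_Werner_Fassbinder"),
    ("released", "db:Veronika_Voss", "1982"),
    ("type", "db:Veronika_Voss", "db:Film"),
    ("director", "db:Lola", "db:Rainer_Werner_Fassbinder"),
    ("released", "db:Lola", "1981"),
    ("type", "db:Lola", "db:Film"),
}

def answer_entity_query(patterns, triples):
    """patterns: list of (predicate, object) constraints on the centre ?x.
    Returns all entities satisfying every constraint (graph pattern matching)."""
    candidates = None
    for pred, obj in patterns:
        matching = {s for (p, s, o) in triples if p == pred and o == obj}
        candidates = matching if candidates is None else candidates & matching
    return candidates if candidates else set()

q = [("director", "db:Rainer_Werner_Fassbinder"), ("released", "1982")]
# answer_entity_query(q, data) → {"db:Veronika_Voss"}
```

The intersection mirrors the conjunctive (Boolean-match) semantics of BGPs: an entity either satisfies all patterns or is not returned at all.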
    Problem. The problem is finding relevant entities in a set of target datasets
Dt, given a source dataset Ds and an entity query qs adhering to the vocabulary
of Ds. Clearly, if there are no syntactical differences at the level of schema and
data as discussed above, then qs can directly be used to retrieve information from
Dt. However, since Web data is heterogeneous in that sense, the following two
strategies can be applied.
    Baseline KW. The first and most widespread solution to this end is to use
keyword search over so-called ‘bag-of-words’ representations of entities. That
is, the description of an entity is simply a set of terms. A query is also a set
of terms, which is then matched against the flat term-based representation of the
data. This unstructured approach bridges differences at the level of schemas and
data simply by ignoring the fine-grained semantics and structure available in the
entity descriptions, as illustrated in Fig. 1. Clearly, this approach is simple but
also flexible in the sense that the same keyword query specified for Ds can also
be used for Dt, because results from Dt can be obtained whenever there are matches
at the level of terms.


Fig. 1. Baseline: A structured query (1) transformed into a keyword query (2) and
matched against bag of word representations of entities (3).

1
      http://sig.ma/

    Our approach. In this paper, we present a framework to address this prob-
lem of searching for entities in Web data using on-the-fly mappings computed in
a pay-as-you-go fashion based on entity relevance models (ERM). This frame-
work is instantiated in the following three steps. (1) First, we compute
an ERM from the results returned from the source dataset Ds using qs. (2) Sec-
ond, we present a light-weight on-the-fly integration technique, which maps the
structure of result candidates from the target datasets Dt to the structure of the
ERM. (3) Finally, the result candidates are ranked according to their similarity
to the ERM using the mappings computed at runtime.


3     Entity Search Over Web Data
In this section, we present how the entity relevance model is constructed and
discuss how this model can be exploited for ranking and relevance-based on-the-
fly data integration.

3.1    Entity Relevance Model
We aim to build a model that captures the structure and content of entities
relevant to the information need, which is expressed in the seed entity query qs .
This proposed model is called the Entity Relevance Model (ERM). It is based
on the notion of the structured relevance model proposed in earlier work [9].
    The ERM is a composite model that consists of several atomic language
models [10] and is constructed as follows. Let q_s be the query submitted against
the source dataset D_s, and R_s = {e_1, ..., e_i, ..., e_m} be the set of m entities
obtained for this query. Across all m entities, we observe n distinct attribute
labels, a_s^1, ..., a_s^j, ..., a_s^n. For each observed attribute label a_s^j, we add a field a_s^j to
the ERM and create an attribute-specific language model ERM^{a_s^j} to instantiate
this field. This atomic model ERM^{a_s^j}(w) specifies the probability of w occurring
in the field a_s^j, for every word w ∈ V, where V is the vocabulary of all words.
ERM^{a_s^j}(w) is computed from the entity descriptions a_s^j ∈ A(e), ∀e ∈ R_s.
In particular, every ERM^{a_s^j}(w) captures the values v (i.e. the content) of the
attributes with label a_s^j and is defined as follows:

    ERM^{a_s^j}(w) = \frac{\sum_{e \in R_s} \sum_{v \in a_s^j} n(w, v)}{\sum_{e \in R_s} \sum_{v \in a_s^j} |v|}    (1)

where n(w, v) denotes the number of occurrences of word w in the content
value v of attribute a_s^j, and |v| the length of v. Note that the outer sum goes over
the entities e ∈ R_s returned by q_s and the inner sum goes over all values v of at-
tributes with label a_s^j. Thus, entity descriptions that do not have the attribute
a_s^j do not contribute to ERM^{a_s^j}(w). In order to capture the importance of the
atomic attribute-specific models, we define the probability of occurrence P(a_s^j)
for every ERM^{a_s^j} as P(a_s^j) = n(a_s^j, e_s^{1...m})/m, where the numerator denotes
the number of entities having attributes with label a_s^j and m = |R_s| is the total
number of entity results. In summary, an ERM has n fields, each corresponding
to an attribute with a specific label; each field has a language model and an
occurrence probability. An example for an ERM constructed from two
entities is illustrated in Fig. 2.
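The construction of the ERM's per-field language models (Eq. 1) and occurrence probabilities P(a_s) can be sketched as follows. The input format (each result entity as a dict from attribute labels to lists of string values) and the function name are our illustrative assumptions, not the paper's implementation.

```python
from collections import Counter, defaultdict

def build_erm(results):
    """Construct an ERM from result entities (illustrative sketch).
    Returns per-field word distributions (Eq. 1) and occurrence
    probabilities P(a_s) = (#entities having the field) / m."""
    word_counts = defaultdict(Counter)  # field -> word counts n(w, v) summed
    length = defaultdict(int)           # field -> total word count (sum of |v|)
    having = Counter()                  # field -> number of entities with it
    for entity in results:
        for label, values in entity.items():
            having[label] += 1
            for v in values:
                words = v.lower().split()
                word_counts[label].update(words)
                length[label] += len(words)
    erm = {a: {w: c / length[a] for w, c in wc.items()}
           for a, wc in word_counts.items()}
    p_occ = {a: having[a] / len(results) for a in having}
    return erm, p_occ

# Two toy result entities, loosely following Fig. 2 (1).
results = [
    {"director": ["Rainer Werner Fassbinder"], "released": ["1982"]},
    {"director": ["Rainer Werner Fassbinder"], "released": ["1973"],
     "language": ["English"]},
]
erm, p = build_erm(results)
# e.g. erm["released"] == {"1982": 0.5, "1973": 0.5}, p["language"] == 0.5
```

With this toy input the output reproduces the pattern of Fig. 2 (2): `director` words each get probability 1/3, `released` splits evenly between the two years, and `language` occurs in only one of the two entities.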


[Figure 2 appears here. Part (1) shows the two result entities e1 and e2 as graphs;
parts (2) and (3) are reproduced below.]

(2) ERM
a_s        P(a_s)   w : ERM^{a_s}(w)
label      1        world:0.2, on:0.2, wires:0.2, . . .
starring   0.5      klaus:0.25, löwitsch:0.25, barbara:. . .
director   1        rainer:0.33, werner:0.33, fassbinder:0.33
released   1        1973:0.5, 1982:0.5
language   0.5      english:1
type       1        film:1

(3) e_t
a_t          w : e_t^{a_t}(w)
i:title      e:0.33, t:0.33, 1994:0.33
i:actors     coyote:0.5, peter:0.5
i:directors  spielberg:0.33, steven:0.33, i:0.33
i:producer   spielberg:0.33, steven:0.33, i:0.33
type         movie:1

Fig. 2. Example ERM (2) built from two entities e1 and e2 as illustrated in (1). The
ERM has a field for each attribute with label a_s. Each field is weighted with P(a_s) and
has a relevance model ERM^{a_s}(w) defining the probability of w occurring in this field.
(3) Representation of an entity e_t with language models for each attribute with label
a_t.




3.2           Searching over Web Data Using ERM

We tackle the problem of searching over heterogeneous data in a way similar to
entity consolidation. That is, given the results es ∈ Rs from the source dataset
obtained for the seed query, we aim to find out which entities in the target
datasets are similar to Rs . We use the ERM as the model of those relevant
results. In particular, we estimate which entities et of Dt are relevant for the
query qs by measuring their similarity to the ERM and rank them by decreasing
similarity. First, we capture the values of a_t^k of each entity e_t using a language
model e_t^{a_t^k}(w). Similar to the ERM, this model is defined based on the number
of word occurrences as follows:

    e_t^{a_t^k}(w) = \frac{\sum_{v \in a_t^k} n(w, v)}{\sum_{v \in a_t^k} |v|}    (2)

As before, the sum goes over all values of attributes with label a_t^k, n(w, v) denotes
the number of occurrences of w in v, and |v| denotes the length of v. Fig. 2 (3)
illustrates an example.
    The similarity of a candidate entity e_t from the target dataset D_t to the
ERM is calculated as the sum of the weighted similarities between the language
model of field a_s^j and the language model of a_t^k, which is calculated via their cross
entropy H:

    Sim(ERM, e_t) = \sum_{a_s^j} \beta_{a_s^j} \cdot P(a_s^j) \cdot H(ERM^{a_s^j} \| e_t^{a_t^k})    (3)

    In particular, we apply this similarity calculation only when we know which
field a_s^j of the ERM should be matched against which attribute a_t^k of e_t. We address
this problem in the next section and show how the ERM can be exploited to
create on-the-fly schema mappings, i.e. mappings between an attribute a_t^k and a
field a_s^j of the ERM. Equation 3 applies to corresponding pairs of attribute and field.
If there is no mapping between a_s^j and a_t^k, then we use a “maximum distance”.
This distance is computed as the cross entropy between ERM^{a_s^j} and a language
model that contains all words in the vocabulary except the ones of ERM^{a_s^j}.
    The similarity for each field a_s^j is weighted by the occurrence probability of
this field, P(a_s^j), and the parameter \beta_{a_s^j}. This parameter gives us the flexibility
to boost the importance of attributes that occur in the query q_s as follows:

    \beta_{a_s^j} = \begin{cases} 1 & \text{if } a_s^j \notin q_s \\ b & \text{if } a_s^j \in q_s, \; b > 1 \end{cases}    (4)
    For constructing the language models of the ERM and of the candidate en-
tities, a maximum likelihood estimation has been used, which is proportional
to the count of the words in an attribute value. However, such an estimation
assigns zero probabilities to those words not occurring in the attribute value.
In order to address this issue, e_t^{a_t^k}(w) is smoothed using a collection-wide model
c_s(w), which captures the probability of w occurring in the entire dataset D_s.
This smoothing is controlled by the Jelinek-Mercer parameter \lambda. As a result, the
entropy H is calculated over the vocabulary V_{a_s^j} of field a_s^j as:

    H(ERM^{a_s^j} \| e_t^{a_t^k}) = \sum_{w \in V_{a_s^j}} ERM^{a_s^j}(w) \cdot \log\left( \lambda \cdot e_t^{a_t^k}(w) + (1 - \lambda) \cdot c_s(w) \right)    (5)
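Equations (3)-(5) can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' implementation: the "maximum distance" for unmapped fields is approximated here by scoring against an empty candidate model, and all names (`cross_entropy`, `similarity`, the toy collection model) are ours.

```python
import math

def cross_entropy(erm_field, cand_model, collection_model, lam=0.8):
    """Eq. (5): entropy over the ERM field's vocabulary, with Jelinek-Mercer
    smoothing of the candidate's language model."""
    h = 0.0
    for w, p_w in erm_field.items():
        smoothed = (lam * cand_model.get(w, 0.0)
                    + (1 - lam) * collection_model.get(w, 1e-6))
        h += p_w * math.log(smoothed)
    return h

def similarity(erm, p_occ, candidate, mapping, collection_model,
               query_fields=frozenset(), b=10.0):
    """Eq. (3): sum of beta * P(a_s) * H over all ERM fields.
    `mapping` gives the aligned candidate attribute per field (or None);
    an unmapped field is scored against an empty model (simplification
    of the paper's maximum-distance treatment)."""
    score = 0.0
    for field, field_model in erm.items():
        beta = b if field in query_fields else 1.0  # Eq. (4)
        attr = mapping.get(field)
        cand_model = candidate.get(attr, {}) if attr else {}
        score += beta * p_occ[field] * cross_entropy(
            field_model, cand_model, collection_model)
    return score
```

Since the probabilities inside the log are below one, each term is negative; better-matching candidates receive scores closer to zero, so ranking by decreasing score follows Eq. (3) as printed.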




3.3   On The Fly Integration Using ERM

We want to determine which attribute of an entity needs to be compared to a
given field of the ERM constructed for Ds . The ERM is not only used for search,
but also exploited for this alignment task. The details for computing mappings
                             a1...r                        1...n
between entity attributes et t      and ERM fields ERM as        are presented in
Algorithm (1).The rational of the algorithm is that a field ajs is aligned to an
                                                      j        ak
attribute akt when the cross entropy H(ERM as ||et t ) between their language
models is low, i.e. a mapping is established, if H is lower than a threshold t
(normalized based on the highest cross entropy, line 12).That is, entropy is used
as a similarity measure and the features used to compare attributes are based on
their entire contents represented using language models. The algorithm iterates
8

over n · r comparisons in worst case for an ERM with n fields and an entity
with r = |A0 (et )| attribute labels. Note that this on-the-fly algorithm operates
only on entities that are requested as part of the search process, i.e. n and
r are rather small, compared to full-fledge upfront integration that takes the
entire schema into account, see Table 1 and Table 3. Especially, this computation
comes for free: searching also requires the same computation (Equation 3) and
thus the entropy values computed here are kept and subsequently reused for that.
Moreover, for a faster performance, ERM fields having an occurrence probability
of P (ajs ) < c can be pruned, because of their negligible influence. In addition,
existing mappings can be used to reduce the number of comparisons even further.


Algorithm 1 On-the-fly Alignment
Input: ERM fields a_s^1, ..., a_s^n; entity e_t with attributes a_t^1, ..., a_t^r; threshold t ∈ [0, 1]
Output: Mappings A := {(a_s^j, a_t^k) | a_s^j ∈ ERM, a_t^k ∈ A′(e_t) ∪ {null}}
 1: A := new Map
 2: for all a_s^j ∈ ERM do
 3:    candMappings := new OrderedByValueMap
 4:    for all a_t^k ∈ A′(e_t) do
 5:       if a_t^k ∉ A.values then              // if not already aligned
 6:          h ← H(ERM^{a_s^j} \| e_t^{a_t^k})  // see Equation (5)
 7:          candMappings.add(a_t^k, h)
 8:       end if
 9:    end for
10:    bestA ← candMappings.firstValue
11:    worstA ← candMappings.lastValue
12:    if bestA < t · worstA then
13:       a_t^k ← candMappings.firstKey
14:       A.add(a_s^j, a_t^k)
15:    else
16:       A.add(a_s^j, null)                    // no mapping found
17:    end if
18: end for
19: return A
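Algorithm 1 can be transcribed into Python as follows. The `entropy` callback is assumed to behave like Eq. (5) and to return a non-negative, distance-like value (lower means more similar); function and variable names are our own.

```python
def align(erm_fields, entity_attrs, entropy, t=0.75):
    """On-the-fly alignment (transcription of Algorithm 1).
    Maps each ERM field to its best-matching candidate attribute, or to
    None if the best entropy is not below t times the worst one."""
    mappings = {}
    for field in erm_fields:
        # Collect entropies for all attributes not yet aligned (lines 3-9).
        candidates = {attr: entropy(field, attr)
                      for attr in entity_attrs
                      if attr not in mappings.values()}
        if not candidates:
            mappings[field] = None
            continue
        best_attr = min(candidates, key=candidates.get)
        best, worst = candidates[best_attr], max(candidates.values())
        # Threshold normalized against the worst (highest) entropy, line 12.
        mappings[field] = best_attr if best < t * worst else None
    return mappings
```

As in the pseudocode, an attribute already claimed by an earlier field is skipped, so the result is an injective partial mapping from ERM fields to candidate attributes.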


4     Experiments
In this section, we report on the experiments conducted using an implementa-
tion of the proposed approach. We performed the experiments with the following
parameter settings: β = 10, c = 0.8, t = 0.75, and we follow the Cranfield [11]
methodology for the experiments on search performance.

4.1    Web Datasets
Our experiments were conducted with two RDF Web datasets, DBpedia 3.5.1
and IMdb. DBpedia is a structured representation of Wikipedia, which contains
more than 9 million entities of various types, among them about 50k entities
typed as films. The IMdb dataset2 is derived from www.imdb.com and trans-
formed into RDF; it contains information on movies.
2
    We thank Julien Gaugaz and the L3S Research Center for providing us their version
    of the IMdb dataset.
Table 1 gives details about each dataset and also shows the varying average size
of entity descriptions |A(e)|.


Dataset   #Entities   #Distinct Attribute Labels   |A(e)| ± StdDev
DBpedia   9.1M        39.6K                        9 ± 18.2
IMdb      859K        32                           11.4 ± 6.4
Table 1. Dataset statistics.

Rel. Entities   IMdb    DBpedia
max             834     47
avg.            114.9   10.9
median          21      5
min             1       1
Table 2. Relevant Entities per Query for each dataset.

Source Dataset   |ERM| ± StdDev
IMdb             15.8 ± 6.7
DBpedia          23 ± 5.4
Table 3. Average Number of Fields of an ERM.




4.2   Queries and Ground Truth
Our goal is to find relevant entities in the target datasets Dt for a given query
qs . We determine the relevant entities in Dt by manually rewriting the query qs
to obtain a structured query qt adhering to the vocabulary of Dt. Fig. 3 shows
such a set of queries: one of the queries serves as the source query qs, and the
results of the other two queries capture the ground truth for the IR experiments.
     Note that this ground truth is conservative in the sense that it considers
only matches that can be obtained through query rewriting based on same-as
mappings. However, some possibly relevant entities may either be represented
differently (using attributes that cannot be aligned via same-as mappings) or
do not contain information for some of the query predicates. Hence, this ground
truth defines a lower bound on the actual performance.
     We created two query sets, each containing 23 BGP entity queries of dif-
ferent complexities, ranging from 2 to 4 triple patterns and yielding a varying
number of relevant entities (see Table 2). The structured queries represent real-
world information needs, e.g. retrieve “all movies directed by Steven Spielberg”,
“all movies available in English and also in Hebrew”, or “all movies directed
by Rainer Werner Fassbinder which were released in 1982”. The last query is
illustrated in Fig. 3.
4.3   Keyword Query Baseline (KW)
We compare the performance of our ERM approach against Semplore [6], a
real-world keyword search system based on Lucene. Semplore creates a
virtual document for every entity description and uses the concatenation of at-
tribute and attribute value as document terms. In the same way, we transform
the structured query into a keyword query of disjunctive terms by using the
concatenations of predicates and constants of the structured query as keywords.
The resulting keyword query retrieves all virtual documents representing entity

Fig. 3. Example of manually created queries as ground truth. Each query represents
the same information need for one of the datasets.



descriptions, which contain some of the corresponding terms. Fig. 1 illustrates
an example.
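The baseline's query transformation can be sketched as follows: constants and predicates of the structured query become keywords, with namespace prefixes dropped and underscores split, mirroring Fig. 1 (2). The function name and tokenization rules are our illustrative assumptions, not Semplore's API.

```python
def to_keyword_query(triple_patterns):
    """Flatten a structured entity query into a disjunctive keyword query.
    triple_patterns: list of (predicate, subject, object); variables
    start with '?' and are skipped."""
    keywords = []
    for pred, subj, obj in triple_patterns:
        parts = [pred] + [t for t in (subj, obj) if not t.startswith("?")]
        for term in parts:
            local = term.split(":")[-1]          # drop namespace prefix
            keywords.extend(local.replace("_", " ").lower().split())
    return keywords

seed_query = [
    ("db:director", "?x", "db:Rainer_Werner_Fassbinder"),
    ("db:released", "?x", "1982"),
    ("type", "?x", "db:Film"),
]
# → ['director', 'rainer', 'werner', 'fassbinder', '1982', 'released'-order:
#   actually ['director', 'rainer', 'werner', 'fassbinder', 'released',
#   '1982', 'type', 'film']
```

This reproduces the keyword query of Fig. 1 (2): "director rainer werner fassbinder released 1982 type film".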


4.4   Search Performance

We evaluate search performance using the standard IR measures precision, recall,
mean average precision (MAP), and mean reciprocal rank (MRR). We retrieve
the top five thousand entities, rank them, and compute the metrics on the
top one thousand entities returned by each method. We evaluate ERM against
the baseline described above. ERM performs alignment on the fly and operates
without any upfront integration or prior knowledge of the schemas. Comparing
the performance of ERM and KW in Fig. 4(a), we observe that ERM outperforms
KW: ERM yields a MAP of 0.55, whereas KW achieves a MAP of about 0.2.
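For reference, the per-query building blocks of MAP and MRR can be computed as in the following sketch (standard textbook definitions, not the actual evaluation code used in the experiments):

```python
def average_precision(ranked, relevant):
    """AP: mean of the precision values at each rank where a relevant
    entity occurs, normalized by the number of relevant entities."""
    hits, precisions = 0, []
    for i, entity in enumerate(ranked, start=1):
        if entity in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant) if relevant else 0.0

def reciprocal_rank(ranked, relevant):
    """RR: inverse rank of the first relevant entity, 0 if none occurs."""
    for i, entity in enumerate(ranked, start=1):
        if entity in relevant:
            return 1.0 / i
    return 0.0

ranked = ["e1", "e2", "e3", "e4"]
relevant = {"e2", "e4"}
print(average_precision(ranked, relevant))  # (1/2 + 2/4) / 2 = 0.5
print(reciprocal_rank(ranked, relevant))    # 1/2 = 0.5
```

MAP and MRR then average these values over all queries of the evaluation set.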
The robustness of the retrieval performance of ERM can be observed in
Fig. 4(b), which shows the interpolated precision across the recall levels.
ERM considerably outperforms the KW baseline at all recall levels.
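Interpolated precision at a recall level r is the maximum precision observed at any recall level ≥ r; a minimal sketch of the interpolation:

```python
def interpolated_precision(precision_recall_points, recall_levels):
    """For each standard recall level, return the highest precision
    achieved at that recall level or beyond."""
    interpolated = []
    for r in recall_levels:
        candidates = [p for p, rec in precision_recall_points if rec >= r]
        interpolated.append(max(candidates) if candidates else 0.0)
    return interpolated

# (precision, recall) pairs observed while walking down a ranked list
points = [(1.0, 0.25), (0.5, 0.25), (0.67, 0.5), (0.75, 0.75)]
print(interpolated_precision(points, [0.0, 0.5, 1.0]))
# → [1.0, 0.75, 0.0]
```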

5     Related Work

We have discussed related work throughout the paper. Essentially, there are two
existing lines of approaches: one based on keyword search, the other on
structured query rewriting. Our approach is a novel combination of the two,
joining the flexibility of keyword search with the power of query
rewriting. Like keyword search, it does not rely on precomputed mappings.
However, it is able to exploit the fine-grained structure of queries and results,
which is the primary advantage of structured query rewriting. In addition, it
can leverage existing mappings created by tools like [12, 13].
    More precisely, there exist similar keyword search retrieval models that also
take the structure of the underlying data into account. In fact, the model under-
lying our approach originates from the concept of language models [10], which
[Fig. 4: (a) bar chart of MAP for KW vs. ERM; (b) interpolated precision-recall
curves for ERM and KW.]

            Fig. 4. IR Performance of ERM compared to KW Baseline.


have been proposed for modeling resources and queries as multinomial distri-
butions over the vocabulary terms. The foundation of our work is established
by Lavrenko et al., who propose relevance-based language models to directly
capture the relevance behind documents and queries. In this context, structural
information has been exploited for constructing structured relevance models
(SRM) [9], the approach most closely related to ERM. The difference is that
while the goal of SRM is to predict values of empty fields in a single-dataset
scenario, ERM targets searching in a completely different setting involving
multiple heterogeneous datasets.
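The flavor of such language-model-based ranking can be illustrated as follows: estimate a unigram term distribution from the seed results and score candidate entities against it, e.g. by negative cross-entropy (a generic sketch with ad-hoc smoothing, not the exact ERM estimation):

```python
import math
from collections import Counter

def language_model(texts):
    """Maximum-likelihood unigram model over whitespace-tokenized texts."""
    counts = Counter(term for text in texts for term in text.split())
    total = sum(counts.values())
    return {term: c / total for term, c in counts.items()}

def score(relevance_model, entity_model, eps=1e-9):
    """Negative cross-entropy: sum_t P(t|R) * log P(t|E); higher means
    more relevant. Unseen terms are smoothed with a small constant eps."""
    return sum(p * math.log(entity_model.get(t, eps))
               for t, p in relevance_model.items())

# Relevance model estimated from the descriptions of the seed-query results
rm = language_model(["director fassbinder film 1982",
                     "film fassbinder drama"])
# Candidate entities from the target dataset, as unigram models
candidates = {"e1": language_model(["fassbinder film 1982"]),
              "e2": language_model(["comedy 1999"])}
ranked = sorted(candidates, key=lambda e: score(rm, candidates[e]), reverse=True)
print(ranked)  # → ['e1', 'e2']
```

The candidate whose description shares more probability mass with the relevance model ranks first.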
    Besides the query rewriting strategy [2], there are database-oriented approaches
bridging the schema and vocabulary gap by query relaxation [14, 15]. However,
these approaches also require a precomputation step to either identify duplicates
[14] or derive alternative relations [14, 15].
    The proposed mapping technique is in principle similar to existing tech-
niques, e.g. [16], to the extent that it relies on the same features, i.e., attribute
values. However, while the use of language models for representing these features
and the entropy-based similarity calculation are common for retrieval tasks,
they have not been applied to the schema mapping problem before. We consider
this a promising approach for embedding the pay-as-you-go data integration
paradigm [7] into the search process.
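The idea of an entropy-based similarity over attribute value distributions can be illustrated with the Jensen-Shannon divergence (an illustrative stand-in; the measure used by the actual mapping technique may differ):

```python
import math
from collections import Counter

def value_model(values):
    """Empirical distribution over the observed values of an attribute."""
    counts = Counter(values)
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two value distributions:
    0 for identical distributions, log 2 for disjoint supports."""
    m = {v: 0.5 * (p.get(v, 0.0) + q.get(v, 0.0)) for v in set(p) | set(q)}
    def kl(a):
        return sum(pa * math.log(pa / m[v]) for v, pa in a.items() if pa > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Attributes of two datasets, each represented by its observed values
db_director = value_model(["fassbinder", "fassbinder", "herzog"])
i_directors = value_model(["fassbinder", "herzog", "herzog"])
i_year      = value_model(["1982", "1979", "1982"])

# db:director maps to the target attribute with the smallest divergence
assert js_divergence(db_director, i_directors) < js_divergence(db_director, i_year)
```

Attributes whose value distributions diverge least are treated as the most likely mapping candidates.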


6       Conclusion

We have proposed a novel approach for searching Web datasets using one single
structured seed query that adheres to the vocabulary of just one of the datasets.
It is based on an entity relevance model computed from the results of the seed
query, which captures the structure and content of relevant results. This model
is used both for matching and ranking relevant results, and for performing data
integration on the fly. The approach combines the flexibility of keyword search,
in the sense that no upfront integration is required, with the power of structured
query rewriting techniques, which comes from the use of the fine-grained structure
of queries and results. The experiments showed that our approach outperforms a
common keyword search baseline.
    We demonstrated our approach between one source and one target dataset.
However, there is no inherent limitation to the number of target datasets. So
theoretically, one query can be used ‘to bind them all’.

7    Acknowledgements
This work was supported by the German Federal Ministry of Education and
Research (BMBF) under the iGreen project (grant 01IA08005K).

References
 1. W3C: SPARQL Query Language for RDF. W3C Recommendation, 15 January 2008.
    http://www.w3.org/TR/rdf-sparql-query/
 2. Calì, A., Lembo, D., Rosati, R.: Query rewriting and answering under constraints
    in data integration systems. In: IJCAI. (2003) 16–21
 3. Choi, N., Song, I.Y., Han, H.: A survey on ontology mapping. SIGMOD Rec. 35
    (September 2006) 34–41
 4. Bonatti, P.A., Hogan, A., Polleres, A., Sauro, L.: Robust and scalable linked
    data reasoning incorporating provenance and trust annotations. J. Web Sem. 9(2)
    (2011) 165–201
 5. Cheng, G., Qu, Y.: Searching linked objects with falcons: Approach, implementa-
    tion and evaluation. Int. J. Semantic Web Inf. Syst. 5(3) (2009) 49–70
 6. Wang, H., Liu, Q., Penin, T., Fu, L., Zhang, L., Tran, T., Yu, Y., Pan, Y.: Sem-
    plore: A scalable IR approach to search the web of data. J. Web Sem. 7(3) (2009)
    177–188
 7. Madhavan, J., Cohen, S., Dong, X.L., Halevy, A.Y., Jeffery, S.R., Ko, D., Yu, C.:
    Web-scale data integration: You can afford to pay as you go. In: CIDR. (2007)
    342–350
 8. Pound, J., Mika, P., Zaragoza, H.: Ad-hoc object retrieval in the web of data. In:
    Proc. of the 19th Int. Conf. on WWW. (2010) 771–780
 9. Lavrenko, V., Yi, X., Allan, J.: Information retrieval on empty fields. In: HLT-
    NAACL. (2007) 89–96
10. Ponte, J.M., Croft, W.B.: A language modeling approach to information retrieval.
    In: SIGIR. (1998) 275–281
11. Cleverdon, C.: The CRANFIELD Tests on Index Language Devices. Aslib (1967)
12. Hu, W., Qu, Y.: Falcon-ao: A practical ontology matching system. J. Web Sem.
    6(3) (2008) 237–239
13. David, J.: AROMA results for OAEI 2009. In: Ontology Matching Workshop, ISWC.
    (2009)
14. Zhou, X., Gaugaz, J., Balke, W.T., Nejdl, W.: Query relaxation using malleable
    schemas. In: SIGMOD Conference. (2007) 545–556
15. Elbassuoni, S., Ramanath, M., Weikum, G.:           Query relaxation for entity-
    relationship search. In: ESWC. (2011) 62–76
16. Duan, S., Fokoue, A., Srinivas, K.: One size does not fit all: Customizing ontology
    alignment using user feedback. In: International Semantic Web Conference. (2010)
    177–192