<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On Rewriting and Answering Queries in OBDA Systems for Big Data (Short Paper)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Diego Calvanese</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ian Horrocks</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ernesto Jimenez-Ruiz</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Evgeny Kharlamov</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Meier</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mariano Rodriguez-Muro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dmitriy Zheleznyakov</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Free University of Bozen-Bolzano</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Oxford University</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>fluid Operations AG</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>The Optique project aims at providing an end-to-end solution for scalable access to Big Data integration, where end users will formulate queries based on a familiar conceptualization of the underlying domain. From the users' queries, the Optique platform will automatically generate appropriate queries over the underlying integrated data, optimize them, and execute them. In this paper we discuss Optique's automatic generation of queries and two systems to support this process: QUEST and PEGASUS. The automatic generation of queries is important and challenging, especially when the queries are over heterogeneous distributed databases and streams, and Optique will provide a scalable solution for it.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
<p>Freeing up expert time would lead to even greater value creation through deeper analysis
and improved decision making.</p>
      <p>
        The approach, known as “Ontology-Based Data Access” (OBDA) [
        <xref ref-type="bibr" rid="ref13 ref2">13,2</xref>
        ], has the
potential to address the data access problem by automating the translation of the
users’ information needs. The key idea is to use an ontology that presents the user with a
conceptual model of the problem domain. The user formulates their information needs,
that is, their requests, in terms of the ontology, and then receives the answers in the same
understandable form. These requests should be executed over the data automatically,
without the intervention of IT experts. To this end, a set of mappings is maintained
that describes the relationship between the terms in the ontology and the data sources.
      </p>
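<p>To make this idea concrete, the following minimal Python sketch shows how ontology terms could be tied to SQL queries over the sources. All names and the mapping format here are hypothetical illustrations, not Optique's actual interfaces:</p>

```python
# Minimal illustration of OBDA mappings (hypothetical names and
# format, not Optique's actual interface): each ontology term is
# mapped to an SQL query over the sources, so a request formulated
# in the ontology vocabulary can be answered without IT-expert help.
MAPPINGS = {
    "ontology:Employee":   "SELECT id FROM staff",
    "ontology:hasManager": "SELECT id, manager_id FROM staff "
                           "WHERE manager_id IS NOT NULL",
}

def sql_for(term):
    """Look up the SQL query that populates an ontology term."""
    if term not in MAPPINGS:
        raise KeyError("no mapping for ontology term: " + term)
    return MAPPINGS[term]
```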
      <p>
        Automating the translation from users’ requests expressed, e.g., in SPARQL, into
highly optimized executable code over the data sources, e.g., SQL queries, is a key
challenge in the development of OBDA systems. In this paper we discuss this automation
in the context of the recently started EU project Optique [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. We present the Query
Transformation component of the Optique OBDA platform. More generally, Optique
aims at developing a new-generation OBDA system integrated via the
Information Workbench platform [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] (cf. Figure 1). It will address a number of issues besides
automated query translation. In particular, it will address: (i) a user-friendly query
formulation interface, (ii) maintenance of the ontology and mappings, (iii) processing and
analytics over streaming data, and (iv) distributed query optimization and execution.
      </p>
      <p>
        In Section 2, we present the architecture of Optique’s query transformation
module. In Section 2.1, we present typical scenarios for the use of its components. In
Section 3.1, we introduce the QUEST system [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] that is intended to be the core of
Optique’s query transformation solution. In Section 3.2, we present a promising new
system, PEGASUS, which we plan to use to support Optique’s query transformation.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Architecture</title>
      <p>Figure 1 gives a general, conceptual overview of the Optique OBDA system. Being
conceptual, it shows only the main components of the system (the full system includes
more components) and depicts the data flow with arrows.</p>
      <p>Let us now see in detail how the Query Transformation (QT) module works. Its
architecture is shown in Figure 2. Note that the figure shows only those
components that are relevant to query transformation. The meaning of the arrows in
Figure 2 is the following: an arrow goes from a (sub-)module X to a (sub-)module Y
if X can call Y at run time.</p>
      <p>
        The Query Transformation component includes the following subcomponents:
The Setup module can be thought of as the initialization module of the system. Its task
is to receive configurations from the Configuration module (an Optique
component external to the QT module) before the actual query transformation happens,
and to distribute this configuration to the other modules of the QT module (cf. Section 2.1).
The Semantic Indexing and Materialization modules are in charge of the creation and
maintenance of the so-called semantic index [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
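<p>The intuition behind the semantic index can be sketched as follows. In the toy Python illustration below (with invented class names; the actual technique of [13] is more involved), a depth-first traversal assigns each class a numeric interval covering its subtree, so that retrieving all instances of a class, including inferred ones, becomes a single range condition:</p>

```python
# Toy illustration of the semantic index idea: a depth-first
# traversal assigns each class an interval covering its subtree in
# the class hierarchy. An individual stored with the index of its
# most specific class then falls in the interval of every
# superclass, so fetching all (direct or inferred) instances of a
# class is a single range scan. Class names here are invented.
def build_index(hierarchy, root):
    """hierarchy: dict mapping a class to its direct subclasses.
    Returns a dict mapping each class to its (low, high) interval."""
    intervals, counter = {}, [0]

    def visit(cls):
        low = counter[0]
        counter[0] += 1
        for sub in hierarchy.get(cls, []):
            visit(sub)
        intervals[cls] = (low, counter[0] - 1)

    visit(root)
    return intervals

def instances_of(cls, intervals, individuals):
    """individuals: list of (name, index), where index is the low
    endpoint of the individual's most specific class."""
    low, high = intervals[cls]
    return [name for name, idx in individuals if low <= idx <= high]
```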
      <p>The Query Transformation Manager is the main QT submodule. It coordinates the
work of the other submodules and manages the query transformation process.</p>
      <sec id="sec-2-1">
        <title>Submodules of the QT Module</title>
        <p>The Query Rewriting module is in charge of the query rewriting process. It transforms
queries from the ontological vocabulary, e.g., SPARQL queries, into the format that is
required to query the data sources, e.g., SQL (see Section 2.1 for details).</p>
        <sec id="sec-2-1-1">
          <title>Syntactic and Semantic Query Optimization</title>
          <p>The Syntactic Query Optimization and Semantic Query Optimization modules are
subroutines of the Query Rewriting module. They perform query optimization during the
query rewriting process (see Section 2.1 for details).</p>
          <p>The Query Execution module is in charge of the actual query evaluation.
The Federation module is a subroutine of the Query Execution module that is needed
to perform query answering over federated databases.</p>
          <p>The Analytics module analyzes answers to a query and decides how to proceed with
them further (see Section 2.1 for details).</p>
          <p>The QT module also interacts with other components of the Optique OBDA system:
The Query Formulation module provides a query interface for end users. This module
receives queries from end users and sends them to the QT module, e.g., via the Sesame API.</p>
        </sec>
      </sec>
      <sec id="sec-2-9">
        <title>Components</title>
        <p>The Configuration module provides the configuration for the QT module that is
required to perform query transformation.</p>
        <p>Ontology Processing is a group of components such as ontology reasoners,
modularization systems, etc. It is called by the QT module to perform semantic optimization.</p>
        <sec id="sec-2-9-1">
          <title>External Components</title>
          <p>The Distributed Query Optimization and Processing component receives rewritten
queries from the QT module and performs their evaluation over the data sources.
The Shared database contains the technical data required for the query answering process,
such as semantic indices, answers to queries, etc.</p>
          <p>The Shared triple store contains the data that can be used by (most of) the
components of the Optique OBDA system. For example, it contains the ontology, the mappings,
the query history, etc. The QT module calls this store, for example, for reasoning over the
ontology during the semantic optimization process.</p>
          <p>The Stream Analytics module provides analysis of the answers to stream queries.
Data sources (RDBs, triple stores, data streams) can also be accessed by the QT module
during query execution (see Section 2.1 for details).</p>
        </sec>
        <sec id="sec-2-9-2">
          <title>2.1 Query Transformation Process</title>
          <p>In this section we will discuss how the QT module performs the query transformation
task. This process can be divided into several stages. Assume that the QT module
receives a query from the Query Formulation module. Then the first stage of the query
transformation process is the initialization of the process.</p>
          <p>
            Initialization. At this stage the Configuration module sends the configuration to the
Setup module, which, in turn, configures the other modules of the QT module. The
initialization includes several steps in which the input ontology and mappings are
analyzed and optimized so as to make the rewriting and optimization algorithms fast,
and the query evaluation over the data sources more efficient (more details can be found
in [
            <xref ref-type="bibr" rid="ref12 ref14">14,12</xref>
            ]). This includes the application of the Semantic Index technique.
Query rewriting. After the initialization, the query transformation itself starts. The
Query Transformation Manager receives a (SPARQL) query Q from the Query
Formulation module. It then forwards the query to the Query Rewriting module, which rewrites the
query into the required format, e.g., SQL for querying relational databases, or Streaming
SPARQL for querying data streams. In the following, for the sake of simplicity, we assume
that the target query format is SQL. Along with the rewriting, the Query Rewriting
module optimizes the rewritten query both syntactically and semantically.
Syntactic optimization. During the transformation process, the initial query may be
turned into a number of SQL queries Q<sub>1</sub>, …, Q<sub>n</sub> such that their union is equivalent
to Q. In the syntactic optimization stage, these queries are optimized to improve
performance, e.g., redundant joins, conditions, etc. within these SQL queries are deleted.
The methods used to detect which parts of the queries have to be optimized are
syntactic, that is, they are based only on the shape of a query and do not involve any reasoning.
Semantic optimization. Then the semantic optimization of the queries is performed. The
queries are optimized in a similar manner as in the case of syntactic optimization. The
difference is that the methods used in semantic optimization are semantic, that is, they
take into account query containment, integrity constraints of the data sources, etc.
Query execution. After rewriting and optimization, the queries Q′<sub>i1</sub>, …, Q′<sub>im</sub> are
returned to the Query Transformation Manager. It sends them to the Query Execution
module. This module decides which of the queries Q′<sub>i1</sub>, …, Q′<sub>im</sub>, if any, need to be
evaluated using distributed query execution, and which can be evaluated directly by the
standard query answering facilities. In the former case, the corresponding queries are
sent to the Distributed Query Optimization and Processing module. In the latter case, the
corresponding queries are evaluated over the data sources by standard means. If some of the
queries have to be evaluated over a federated database system, the Query Execution module
entrusts this task to the Federation module.
          </p>
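<p>A minimal Python sketch of these stages may help to fix the data flow. The function arguments stand in for the modules described above; this is not Optique's actual API:</p>

```python
# Hedged sketch of the query transformation stages (placeholders,
# not Optique's API): a SPARQL query is rewritten into a union of
# SQL queries, each member of the union is optimized syntactically
# and then semantically, and the result is split between direct and
# distributed evaluation.
def transform(sparql, rewrite, syntactic_opt, semantic_opt,
              needs_distribution):
    union = [semantic_opt(syntactic_opt(q)) for q in rewrite(sparql)]
    direct = [q for q in union if not needs_distribution(q)]
    distributed = [q for q in union if needs_distribution(q)]
    return direct, distributed
```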
          <p>Query answer management. After the query evaluation process, the answers to the
queries that have been sent directly to the data sources are returned to the Query
Transformation Manager. The manager transforms them into the required format and sends
them back to the Query Formulation module, which takes care of presenting the
answers to end users.</p>
          <p>The answers to the queries that have been sent to the Distributed Query Optimization and
Processing module do not necessarily go directly to the Query Transformation Manager,
but rather to the Shared database. The reason is that the answers can be very large (up
to several GBs), so sending them directly to the QT component could hang the
system. The Query Transformation Manager receives a signal that the answers are in the
Shared database, together with some metadata about them. Then, together with the
Analytics module, it decides how to proceed further. The answers to one-time queries, e.g.,
SQL queries over relational databases, eventually go to the Query Formulation module,
while the answers to stream queries go to the Stream Analytics module.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Possible Implementations</title>
      <p>In this section we discuss two options for the implementation of the Query
Transformation component: QUEST and PEGASUS.</p>
      <sec id="sec-3-1">
        <title>Quest</title>
        <p>
          One of the options for the QT module is the QUEST implementation [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. QUEST can
be used in several stages of the query transformation process. In particular, it performs
the following tasks: (i) the initialization stage of the process, in the manner described
in Section 2.1; (ii) query execution based on standard relational technologies;
(iii) rewriting and optimization. We consider the last task in more detail.
Rewriting and optimization. QUEST performs query rewriting and optimization
for one-time queries by means of query rewriting into SQL.
Given an input query Q, two steps are performed at query transformation time: (i) query
rewriting, where Q is transformed into a new query Q′ that takes into account the
semantics of the OBDA ontology; (ii) query unfolding, where Q′ is transformed into a
single SQL query using the mappings of the OBDA system. We now provide an intuitive
description of these two steps.
        </p>
        <p>
          – Query rewriting. [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] Query rewriting uses the ontology axioms to compute a set
of queries that encode the semantics of the ontology, such that if we evaluate the
union of these queries over the data, we will obtain sound and complete answers.
The query rewriting process works using the ontology axioms and unification to
generate more specific queries from the original, more general query. In QUEST,
this process is iterative, and stops when no more queries can be generated.
We note that the query rewriting algorithm is a variation of the TreeWitness
rewriter [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] optimized in order to obtain a fast query rewriting procedure that
generates a minimal number of queries.
– Query unfolding. Query unfolding uses the mapping program (see [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]) to
transform the rewritten query into SQL. First, QUEST transforms the rewritten query
into a Datalog program. Then, the program is resolved [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] against the mapping
program, until no resolution step can be applied. At each resolution step, QUEST
replaces an atom formulated in terms of ontology predicates, with the body of a
mapping rule. In case there is no matching rule for an atom, that sub-query is
logically empty and is eliminated. Finally, when no more resolution steps are possible,
we have a new Datalog program, formulated in terms of database tables, that can be
directly translated into a union of select-project-join SQL queries. Also at this step,
QUEST makes use of query containment w.r.t. dependencies to detect redundant
queries and to eliminate redundant join operations in individual queries (i.e., using
primary key metadata). Last, QUEST always attempts to generate simple queries,
with no sub-queries or structurally complex SQL. This is necessary to ensure that
most relational database engines are able to generate efficient execution plans.
        </p>
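<p>The two steps can be illustrated with a deliberately simplified Python sketch. Queries are conjunctions of class atoms, mappings send classes to tables, and a single expansion pass stands in for QUEST's iterative rewriting; the data structures are toy ones, not QUEST's actual representations:</p>

```python
# Simplified sketch of QUEST's two steps (toy data structures). A
# query is a list of class names (a conjunction of class atoms);
# the ontology is a list of subclass axioms (Sub, Super); mappings
# send a class to a database table.
def rewrite(query, subclass_axioms):
    """Step (i): generate more specific queries from the original
    one; the union of the results reflects the ontology's semantics.
    (One expansion pass; QUEST iterates until no new queries arise.)"""
    queries = [query]
    for i, cls in enumerate(query):
        for sub, sup in subclass_axioms:
            if sup == cls:
                queries.append(query[:i] + [sub] + query[i + 1:])
    return queries

def unfold(queries, mappings):
    """Step (ii): replace ontology atoms by their mapped tables; a
    query containing an unmapped atom is logically empty and is
    dropped. The result is a union of select-project-join queries."""
    sql = []
    for q in queries:
        if all(cls in mappings for cls in q):
            sql.append("SELECT * FROM " + ", ".join(mappings[c] for c in q))
    return " UNION ".join(sql)
```

Dropping queries with unmapped atoms corresponds to QUEST's elimination of logically empty sub-queries during resolution.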
      </sec>
      <sec id="sec-3-2">
        <title>Pegasus</title>
        <p>
          Another option for the query transformation phase is the PEGASUS implementation
[
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. PEGASUS adapts the well-known Chase &amp; Backchase (C&amp;B) algorithm [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] and
outperforms existing C&amp;B implementations, usually by two orders of magnitude.
Originally, C&amp;B was introduced in the context of semantic query optimization for minimizing
conjunctive queries under integrity constraints. Recently, it was shown that C&amp;B
can be reconfigured for OBDA. More precisely, it can be used to compute perfect
reformulations by exploiting its logical properties. Thus, PEGASUS can be used in several
stages of the query transformation process:
– semantic query optimization at the SPARQL level, thus removing redundant join
operations w.r.t. the ontology,
– query rewriting, i.e., computing the perfect reformulation, and
– semantic query optimization at the SQL level, i.e., optimizing SQL queries according
to the constraints encoded in a database schema.
        </p>
        <p>
          Generally speaking, C&amp;B takes a basic graph pattern query and an ontology as input
and applies two phases to optimize the input query: (i) the classical chase procedure
[
          <xref ref-type="bibr" rid="ref1 ref10">10,1</xref>
          ] is used as a preprocessing step in which a universal plan is computed, and
(ii) in the subsequent backchase phase all (exponentially many) subqueries of the
universal plan are enumerated and checked for equivalence to the original query, i.e., the
backchase phase is the actual optimization process.
        </p>
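<p>A toy propositional rendering of the two phases is given below. It is illustrative only: the actual algorithm works on conjunctive queries with variables, and PEGASUS additionally handles infinite chase sequences and SPARQL 1.1:</p>

```python
from itertools import combinations

# Toy propositional rendering of Chase & Backchase. A query is a set
# of atoms; a constraint (premise, conclusion) states that
# 'conclusion' holds whenever 'premise' does. In this abstraction, a
# subquery of the universal plan is equivalent to the original query
# exactly when its chase implies every original atom.
def chase(atoms, constraints):
    """Phase (i): saturate a set of atoms under the constraints
    (assumes the chase terminates, as it trivially does here)."""
    atoms = set(atoms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in constraints:
            if premise in atoms and conclusion not in atoms:
                atoms.add(conclusion)
                changed = True
    return atoms

def backchase(query, constraints):
    """Phase (ii): enumerate subqueries of the universal plan by
    increasing size and return the first one equivalent to the
    original query, i.e. a minimal equivalent subquery."""
    universal = chase(query, constraints)
    for size in range(1, len(universal) + 1):
        for sub in combinations(sorted(universal), size):
            if chase(sub, constraints) >= set(query):
                return set(sub)
    return set(query)
```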
        <p>C&amp;B makes use of the classical chase algorithm, which does not necessarily
terminate. While semantic query optimization typically assumes a finite chase, OBDA does
not necessarily do so. Thus, C&amp;B has been extended in such a way that it can
handle infinite chase sequences, e.g., for DL-Lite ontologies. Furthermore, C&amp;B has been
extended to handle SPARQL 1.1 queries beyond basic graph patterns.</p>
        <p>PEGASUS works in a bottom-up manner, i.e., during the backchase phase it
enumerates subqueries by increasing number of triple patterns. However, this is not done in a
naive way: PEGASUS makes heavy use of pruning and optimization strategies to avoid
inspecting a large portion of unnecessary queries. The main optimization steps in
PEGASUS can be summarized as follows:
– The guided backchase avoids inspecting subqueries that will be subsumed by other
queries in the perfect rewriting.
– Containment checks between queries with many triple patterns are avoided by
reducing them to containment checks between sets of queries with few triple patterns only.
– A necessary condition for query containment that can be easily computed.
PEGASUS performs full containment checks only when this condition is satisfied.</p>
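<p>The last optimization can be illustrated as follows. The concrete necessary condition used by PEGASUS may differ; the sketch relies on the observation that, for Q1 to be contained in Q2 via a homomorphism from Q2 into Q1, every predicate of Q2 must also occur in Q1:</p>

```python
# Illustration of a cheap pre-check for query containment (the
# concrete condition used by PEGASUS may differ). Q1 is contained
# in Q2 only if there is a homomorphism from Q2 into Q1, so every
# predicate occurring in Q2 must also occur in Q1; comparing
# predicate sets is therefore a sound, easily computed necessary
# condition that gates the expensive full check.
def maybe_contained(q1_atoms, q2_atoms, full_check):
    """q*_atoms: iterables of (predicate, arguments) triple patterns;
    full_check: the expensive containment test, called only when the
    necessary condition holds."""
    preds1 = {pred for pred, _ in q1_atoms}
    preds2 = {pred for pred, _ in q2_atoms}
    if not preds2 <= preds1:
        return False  # necessary condition fails: cannot be contained
    return full_check(q1_atoms, q2_atoms)
```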
        <p>By applying these optimization steps, input SPARQL (or SQL) queries can be
efficiently rewritten to a form in which they can then be further processed.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>In this paper we considered the query transformation and optimization problems in the
context of query answering in the Optique OBDA system. We introduced the Query
Transformation component of the system and discussed two options for implementing this
component: QUEST and PEGASUS. We briefly introduced these two
implementations and discussed their peculiarities.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Beeri</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vardi</surname>
            ,
            <given-names>M.Y.</given-names>
          </string-name>
          :
          <article-title>A Proof Procedure for Data Dependencies</article-title>
          .
          <source>J. ACM</source>
          <volume>31</volume>
          (
          <issue>4</issue>
          ),
          <fpage>718</fpage>
          -
          <lpage>741</lpage>
          (
          <year>1984</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giacomo</surname>
            ,
            <given-names>G.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lembo</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lenzerini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Poggi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rodriguez-Muro</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosati</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruzzi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Savo</surname>
            ,
            <given-names>D.F.</given-names>
          </string-name>
          :
          <article-title>The MASTRO system for ontology-based data access</article-title>
          .
          <source>Semantic Web</source>
          <volume>2</volume>
          (
          <issue>1</issue>
          ),
          <fpage>43</fpage>
          -
          <lpage>53</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giacomo</surname>
            ,
            <given-names>G.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lembo</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lenzerini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosati</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Tractable Reasoning and Efficient Query Answering in Description Logics: The DL-Lite Family</article-title>
          .
          <source>JAR</source>
          <volume>39</volume>
          (
          <issue>3</issue>
          ) (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Crompton</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          : Keynote talk at the W3C Workshop on Semantic Web in Oil &amp; Gas Industry, Houston, TX, USA, 9-10 December (
          <year>2008</year>
          ), available from http://www.w3.org/2008/12/ogws-slides/Crompton.pdf
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Deutsch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Popa</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tannen</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Physical data independence, constraints, and optimization with universal plans</article-title>
          . pp.
          <fpage>459</fpage>
          -
          <lpage>470</lpage>
          . VLDB '
          <volume>99</volume>
          (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Giese</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haase</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horrocks</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ioannidis</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kllapi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koubarakis</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lenzerini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Möller</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Özçep</surname>
            ,
            <given-names>Ö.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rodriguez-Muro</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosati</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schlatte</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soylu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Waaler</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Scalable End-user Access to Big Data</article-title>
          . In: Akerkar, R. (ed.):
          <article-title>Big Data Computing</article-title>
          . Chapman and Hall/CRC, Florida. To appear. (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Haase</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schwarte</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>The information workbench as a self-service platform for linked data applications</article-title>
          . In: COLD (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kikot</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kontchakov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zakharyaschev</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Conjunctive Query Answering with OWL 2 QL</article-title>
          . In: KR (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Leitsch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>The resolution calculus</article-title>
          .
          <source>Texts in theoretical computer science</source>
          , Springer (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Maier</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mendelzon</surname>
            ,
            <given-names>A.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sagiv</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Testing Implications of Data Dependencies</article-title>
          .
          <source>ACM Trans. Database Syst</source>
          .
          <volume>4</volume>
          (
          <issue>4</issue>
          ),
          <fpage>455</fpage>
          -
          <lpage>469</lpage>
          (
          <year>1979</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Meier</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>The backchase revisited</article-title>
          .
          <source>Submitted for Publication</source>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Rodriguez-Muro</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Dependencies: Making Ontology Based Data Access Work</article-title>
          . In: AMW (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Rodriguez-Muro</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>High Performance Query Answering over DL-Lite Ontologies</article-title>
          . In: KR (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Rodriguez-Muro</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Quest, an OWL 2 QL Reasoner for Ontology-based Data Access</article-title>
          . In: OWLED (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>