<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Cascading Map-Side Joins over HBase for Scalable Join Processing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexander Schätzle</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martin Przyjaciel-Zablocki</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christopher Dorner</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thomas Hornung</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Georg Lausen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Freiburg</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>59</fpage>
      <lpage>74</lpage>
      <abstract>
        <p>One of the major challenges in large-scale data processing with MapReduce is the smart computation of joins. Since Semantic Web datasets published in RDF have increased rapidly over the last few years, scalable join techniques become an important issue for SPARQL query processing as well. In this paper, we introduce the Map-Side Index Nested Loop Join (MAPSIN join) which combines scalable indexing capabilities of NoSQL data stores like HBase, that suffer from an insufficient distributed processing layer, with MapReduce, which in turn does not provide appropriate storage structures for efficient large-scale join processing. While retaining the flexibility of commonly used reduce-side joins, we leverage the effectiveness of map-side joins without any changes to the underlying framework. We demonstrate the significant benefits of MAPSIN joins for the processing of SPARQL basic graph patterns on large RDF datasets by an evaluation with the LUBM and SP2Bench benchmarks. For selective queries, MAPSIN join based query execution outperforms reduce-side join based execution by an order of magnitude.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Most of the information in the classical “Web of Documents” is designed for
human readers, whereas the idea behind the Semantic Web is to build a “Web
of Data” that enables computers to understand and use the information in the
web. The advent of this Web of Data gives rise to new challenges with regard to
query evaluation on the Semantic Web. The core technologies of the Semantic
Web are RDF (Resource Description Framework) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for representing data in
a machine-readable format and SPARQL [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] for querying RDF data. However,
querying RDF datasets at web-scale is challenging, especially because the
computation of SPARQL queries usually requires several joins between subsets of the
data. On the other side, classical single-place machine approaches have reached
a point where they cannot scale with respect to the ever increasing amount
of available RDF data (cf. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]). Renowned for its excellent scaling properties,
the MapReduce paradigm [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] is an attractive candidate for distributed SPARQL
processing. The Apache Hadoop platform is the most prominent and widely used
open-source MapReduce implementation. In the last few years many companies
have built-up their own Hadoop infrastructure but there are also ready-to-use
cloud services like Amazon’s Elastic Compute Cloud (EC2) offering the Hadoop
platform as a service (PaaS). Thus, in contrast to specialized distributed RDF
systems like YARS2 [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] or 4store [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], the use of existing Hadoop MapReduce
infrastructures enables scalable, distributed and fault-tolerant SPARQL
processing out-of-the-box without any additional installation or management overhead.
Following this avenue, we introduced the PigSPARQL project in [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] that offers
full support for SPARQL 1.0 and is implemented on top of Hadoop. However,
while the performance and scaling properties of PigSPARQL for complex
analytical queries are competitive, the performance for selective queries is not satisfying
due to the lack of built-in index structures and unnecessary data shuffling as join
computation is done in the reduce phase.
      </p>
      <p>In this paper we present a new MapReduce join technique, the Map-Side
Index Nested Loop Join (MAPSIN join), that uses the indexing capabilities of a
distributed NoSQL data store to improve query performance of selective queries.
MAPSIN joins are completely processed in the map phase to avoid costly data
shuffling by using HBase as underlying storage layer. Our evaluation shows an
improvement of up to one order of magnitude over the common reduce-side join
for selective queries. Overall, the major contributions of this paper are as follows:
– We describe a space-efficient storage schema for large RDF graphs in HBase
while retaining favourable access characteristics. By using HBase instead of
HDFS, we can avoid shuffling join partitions across the network and instead
only access the relevant join partners in each iteration.
– We present the MAPSIN join algorithm, which can be evaluated cascadingly
in subsequent MapReduce iterations. In contrast to other approaches, we do
not require an additional shuffle and reduce phase in order to preprocess the
data for consecutive joins. Moreover, we do not require any changes to the
underlying frameworks.
– We demonstrate an optimization of the basic MAPSIN join algorithm for the
efficient processing of multiway joins. This way, we can save n MapReduce
iterations for star join queries with n + 2 triple patterns.</p>
      <p>The paper is structured as follows: Section 2 provides a brief introduction to the
technical foundations for this paper. Section 3 describes our RDF storage schema
for HBase, while Section 4 presents the MAPSIN join algorithm. We continue
with a presentation of the evaluation of our approach in Section 5, followed by
a discussion of related work in Section 6. We conclude in Section 7 and give an
outlook on future work.
</p>
    </sec>
    <sec id="sec-2">
      <title>Background</title>
      <sec id="sec-2-1">
        <title>RDF &amp; SPARQL</title>
        <p>
          RDF [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] is the W3C recommended standard model for representing knowledge
about arbitrary resources, e.g. articles and authors. An RDF dataset consists of a
set of RDF triples in the form (subject, predicate, object) that can be interpreted
as “subject has property predicate with value object”. For clarity of presentation,
we use a simplified RDF notation in the following. It is possible to visualize an
RDF dataset as directed, labeled graph where every triple corresponds to an
edge (predicate) from subject to object. Figure 1 shows an RDF graph with
information about articles and corresponding authors.
        </p>
        <p>[Figure 1: Example RDF graph and SPARQL basic graph pattern. The graph describes Article1 (title "PigSPARQL", year "2011") and Article2 (title "RDFPath", year "2011"), both connected to the authors Alex and Martin by author edges; Article1 cites Article2. The figure also shows a SPARQL basic graph pattern asking for all articles with known title, author and year of publication.]</p>
        <p>
          SPARQL is the W3C recommended declarative query language for RDF. A
SPARQL query defines a graph pattern P that is matched against an RDF graph
G. This is done by replacing the variables in P with elements of G such that the
resulting graph is contained in G (pattern matching). The most basic constructs
in a SPARQL query are triple patterns, i.e. RDF triples where subject, predicate
and object can be variables, e.g. (?s, p, ?o). A set of triple patterns concatenated
by AND (.) is called a basic graph pattern (BGP) as illustrated in Figure 1. The
query asks for all articles with known title, author and year of publication. The
result of a BGP is computed by joining the variable mappings of all triple
patterns on their shared variables, in this case ?article. For a detailed definition of
the SPARQL syntax we refer the interested reader to the official W3C
Recommendation [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. A formal definition of the SPARQL semantics can also be found
in [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. In this paper we focus on efficient join processing with MapReduce and
NoSQL (i.e. HBase) and therefore only consider SPARQL BGPs.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>MapReduce</title>
        <p>
          The MapReduce programming model [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] enables scalable, fault tolerant and
massively parallel computations using a cluster of machines. The basis of Google’s
MapReduce is the distributed file system GFS [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] where large files are split into
equal sized blocks, spread across the cluster and fault tolerance is achieved by
replication. The workflow of a MapReduce program is a sequence of MapReduce
iterations each consisting of a Map and a Reduce phase separated by a so-called
Shuffle &amp; Sort phase. A user has to implement map and reduce functions which
are automatically executed in parallel on a portion of the data. The map function
gets invoked for every input record represented as a key-value pair. It outputs a
list of new intermediate key-value pairs which are then sorted and grouped by
their key. The reduce function gets invoked for every distinct intermediate key
together with the list of all according values and outputs a list of values which
can be used as input for the next MapReduce iteration.
        </p>
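        <p>As an illustrative sketch only (hypothetical helper names, no Hadoop API involved), one MapReduce iteration can be simulated in a few lines:

```python
from itertools import groupby
from operator import itemgetter

def run_mapreduce(records, map_fn, reduce_fn):
    """Simulate one MapReduce iteration: map, shuffle and sort, reduce."""
    # Map phase: invoke map_fn for every input record (a key-value pair).
    intermediate = []
    for key, value in records:
        intermediate.extend(map_fn(key, value))
    # Shuffle and Sort phase: sort intermediate pairs and group them by key.
    intermediate.sort(key=itemgetter(0))
    output = []
    for key, group in groupby(intermediate, key=itemgetter(0)):
        values = [v for _, v in group]
        # Reduce phase: invoke reduce_fn once per distinct intermediate key.
        output.extend(reduce_fn(key, values))
    return output

# Classic word count: map emits (word, 1), reduce sums the counts per word.
def wc_map(_, line):
    return [(word, 1) for word in line.split()]

def wc_reduce(word, counts):
    return [(word, sum(counts))]

result = run_mapreduce([(0, "rdf sparql rdf"), (1, "sparql")], wc_map, wc_reduce)
```

        In a real Hadoop job the intermediate pairs are shuffled across machines; the in-memory sort and group-by above merely mimic that phase.</p>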
        <p>We use Apache Hadoop as it is the most popular open-source implementation
of Google’s GFS and MapReduce framework that is used by many companies
like Yahoo!, IBM or Facebook.</p>
        <p>
          Map-Side vs. Reduce-Side Join. Processing joins with MapReduce is a
challenging task as datasets are typically very large [
          <xref ref-type="bibr" rid="ref20 ref5">5,20</xref>
          ]. If we want to join two
datasets with MapReduce, L ⋈ R, we have to ensure that the subsets of L and R
with the same join key values can be processed on the same machine. For joining
arbitrary datasets on arbitrary keys we generally have to shuffle data over the
network or choose appropriate pre-partitioning and replication strategies.
        </p>
        <p>
          The most prominent and flexible join technique in MapReduce is called
Reduce-Side Join [
          <xref ref-type="bibr" rid="ref20 ref5">5,20</xref>
          ]. Some literature also refers to it as Repartition Join [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]
as the idea is based on reading both datasets (map phase) and repartitioning them
according to the join key (shuffle phase). The actual join computation is done
in the reduce phase. The main drawback of this approach is that both datasets
are completely transferred over the network regardless of the join output. This
is especially inefficient for selective joins and consumes a lot of network
bandwidth. Another group of joins is based on getting rid of the shuffle and reduce
phase to avoid transferring both datasets over the network. This kind of join
technique is called Map-Side Join since the actual join processing is done in
the map phase. The most common one is the Map-Side Merge Join [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
However, this join cannot be applied on arbitrary datasets. A preprocessing step is
necessary to fulfill several requirements: datasets have to be sorted and equally
partitioned according to the join key. If the preconditions are fulfilled, the map
phase can process an efficient parallel merge join between pre-sorted partitions
and data shuffling is not necessary. However, if we want to compute a sequence
of such joins, the shuffle and reduce phases are needed to guarantee that the
preconditions for the next join iteration are fulfilled. Therefore, map-side joins
are generally hard to cascade and the advantage of avoiding a shuffle and reduce
phase is lost. Our MAPSIN join approach is designed to overcome this drawback
by using the distributed index of a NoSQL system like HBase.
        </p>
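        <p>To make the data flow concrete, the following sketch simulates a reduce-side (repartition) join on toy records (hypothetical names; the shuffle is modeled by an in-memory dictionary, whereas a real job moves every tagged record over the network):

```python
from collections import defaultdict

def repartition_join(L, R, l_key, r_key):
    """Simulate a reduce-side (repartition) join of datasets L and R."""
    # Map phase: tag each record with its origin; the join key value becomes
    # the intermediate key, so BOTH datasets are repartitioned in full.
    shuffled = defaultdict(list)
    for rec in L:
        shuffled[rec[l_key]].append(("L", rec))
    for rec in R:
        shuffled[rec[r_key]].append(("R", rec))
    # Reduce phase: per join key, combine every L-record with every R-record.
    joined = []
    for key in sorted(shuffled):
        left = [rec for tag, rec in shuffled[key] if tag == "L"]
        right = [rec for tag, rec in shuffled[key] if tag == "R"]
        for l in left:
            for r in right:
                merged = dict(l)
                merged.update(r)
                joined.append(merged)
    return joined

articles = [{"article": "Article1", "title": "PigSPARQL"}]
authors = [{"article": "Article1", "author": "Alex"},
           {"article": "Article1", "author": "Martin"}]
rows = repartition_join(articles, authors, "article", "article")
```

        Note that both input datasets end up in the shuffled structure regardless of how many records actually join, which is exactly the inefficiency for selective joins described above.</p>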
      </sec>
      <sec id="sec-2-3">
        <title>HBase</title>
        <p>
          HBase is a distributed, scalable and strictly consistent column-oriented NoSQL
data store, inspired by Google’s Bigtable [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] and well integrated into Hadoop.
Hadoop’s distributed file system, HDFS, is designed for sequential reads and
writes of very large files in a batch processing manner but lacks the ability to
access data randomly in close to real-time. HBase can be seen as an additional
storage layer on top of HDFS that supports efficient random access. The data
model of HBase corresponds to a sparse multi-dimensional sorted map with the
following access pattern:
(Table, RowKey, Family, Column, Timestamp) → Value
The rows of a table are sorted and indexed according to their row key and every
row can have an arbitrary number of columns. Columns are grouped into column
families and column values (denoted as cell) are timestamped and thus support
multiple versions. HBase tables are dynamically split into regions of contiguous
row ranges with a configured maximum size. When a region becomes too large,
it is automatically split into two regions at the middle key (auto-sharding).
        </p>
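        <p>This data model can be sketched as a toy in-memory analogue (class and method names are hypothetical; real HBase keys are byte arrays and regions are distributed across machines):

```python
import bisect

class HBaseTableModel:
    """Toy model (hypothetical class) of HBase's sparse, sorted map with the
    access pattern (RowKey, Family, Column, Timestamp) -> Value."""

    def __init__(self):
        self.rows = {}        # row key -> {(family, column): [(timestamp, value), ...]}
        self.row_index = []   # row keys kept sorted, as HBase stores rows

    def put(self, row, family, column, timestamp, value):
        if row not in self.rows:
            bisect.insort(self.row_index, row)
            self.rows[row] = {}
        cell = self.rows[row].setdefault((family, column), [])
        cell.append((timestamp, value))   # cells are versioned by timestamp

    def get(self, row, family, column):
        """Random access by row key: all versions of a cell, newest first."""
        versions = self.rows.get(row, {}).get((family, column), [])
        return [value for ts, value in sorted(versions, reverse=True)]

    def scan(self, start_row, stop_row):
        """Range scan over the sorted row index (start inclusive, stop exclusive)."""
        lo = bisect.bisect_left(self.row_index, start_row)
        hi = bisect.bisect_left(self.row_index, stop_row)
        return [(r, self.rows[r]) for r in self.row_index[lo:hi]]

store = HBaseTableModel()
store.put("Article1", "p", "title", 1, "PigSPARQL")
store.put("Article1", "p", "author", 1, "Alex")
store.put("Article1", "p", "author", 2, "Martin")
store.put("Article2", "p", "title", 1, "RDFPath")
```

        Auto-sharding would split the sorted row index into contiguous region ranges; the toy model keeps a single sorted list for simplicity.</p>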
        <p>However, HBase has neither a declarative query language nor built-in
support for native join processing, leaving higher-level data transformations to the
overlying application layer. In our approach we propose a map-side join strategy
that leverages the implicit index capabilities of HBase to overcome the usual
restrictions of map-side joins as outlined in Section 2.2.
</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>RDF Storage Schema for HBase</title>
      <p>In contrast to relational databases, NoSQL data stores have neither a common
data model nor a common query language like SQL. Hence, the implementation
of our join approach strongly relies on the actual NoSQL store used as backend.
In our initial experiments we considered HBase and Cassandra, two popular
NoSQL stores with support for MapReduce. We decided to use HBase for our
implementation as it proved to be more stable and also easier to handle in our
cluster since HBase was developed to work with Hadoop from the beginning.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] the authors adopted the idea of Hexastore [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] to index all possible
orderings of an RDF triple for storing RDF data in HBase. This results in six
tables in HBase allowing to retrieve results for any possible SPARQL triple
pattern with a single lookup on one of the tables (except for a triple pattern with
three variables). However, as HDFS has a default replication factor of three and
data in HBase is stored in files on HDFS, an RDF dataset is actually stored
18 times using this schema. Moreover, storage space is not the only concern: loading
a web-scale RDF dataset into HBase also becomes very costly and consumes many
resources. Our storage schema for RDF data in HBase is inspired by [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and
uses only two tables, Ts|po and To|ps. We extend the schema with a triple
pattern mapping that leverages the power of predicate push-down filters in HBase
to overcome possible performance shortcomings of a two table schema.
Furthermore, we improve the scalability of the schema by introducing a modified row key
design for class assignments in RDF which would otherwise lead to overloaded
regions constraining both scalability and performance.
      </p>
      <p>In the Ts|po table, an RDF triple is stored using the subject as row key, the
predicate as column name and the object as column value. If a subject has
more than one object for a given predicate (e.g. an article having more than
one author), these objects are stored as different versions in the same column.
The notation Ts|po indicates that the table is indexed by subject. Table To|ps
follows the same design. In both tables there is only one single column family
that contains all columns. Table 1 illustrates the corresponding Ts|po table for
the RDF graph in Section 2.1.</p>
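      <p>As a sketch, the subject-indexed table can be modeled with plain dictionaries (illustrative only; in real HBase the columns live inside a column family and the object versions carry timestamps, both elided here):

```python
def build_ts_po(triples):
    """Subject-indexed table: the subject becomes the row key, the predicate
    the column name, and the objects the (versioned) column values."""
    table = {}
    for s, p, o in triples:
        table.setdefault(s, {}).setdefault(p, []).append(o)
    return table

# Fragment of the example RDF graph from Section 2.1.
triples = [
    ("Article1", "title", "PigSPARQL"), ("Article1", "year", "2011"),
    ("Article1", "author", "Alex"), ("Article1", "author", "Martin"),
    ("Article1", "cite", "Article2"),
    ("Article2", "title", "RDFPath"), ("Article2", "year", "2011"),
]
ts_po = build_ts_po(triples)
```

      The object-indexed table follows the same construction with subject and object swapped.</p>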
      <p>
        At first glance, this storage schema seems to have performance drawbacks
when compared to the six table schema in [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] since there are only indexes for
subjects and objects. However, we can use the HBase Filter API to specify
additional column filters for table index lookups. These filters are applied directly on
server side such that no unnecessary data must be transferred over the network
(predicate push-down). As already mentioned in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], a table with predicates as
row keys causes scalability problems since the number of predicates in an
ontology is usually fixed and relatively small which results in a table with just a few
very fat rows. Considering that all data in a row is stored on the same machine,
the resources of a single machine in the cluster become a bottleneck. Indeed, if
only the predicate in a triple pattern is given, we can use the HBase Filter API
to answer this request with a table scan on Ts|po or To|ps using the predicate as
column filter. Table 2 shows the mapping of every possible triple pattern to the
corresponding HBase table. Overall, experiments on our cluster showed that the
two table schema with server side filters has similar performance characteristics
compared to the six table schema but uses only one third of storage space.
      </p>
      <p>Our experiments also revealed some fundamental scaling limitations of the
storage schema caused by the To|ps table. In general, an RDF dataset uses a
relatively small number of classes but contains many triples that link resources
to classes, e.g. (Alex, rdf:type, foaf:Person). Thus, using the object of a triple
as row key means that all resources of the same class will be stored in the
same row. With increasing dataset size these rows become very large and exceed
the configured maximum region size resulting in overloaded regions that contain
only a single row. Since HBase cannot split these regions the resources of a single
machine become a bottleneck for scalability. To circumvent this problem we use
a modified To|ps row key design for triples with predicate rdf:type. Instead of
using the object as row key we use a compound row key of object and subject,
e.g. (foaf:Person|Alex). As a result, we cannot access all resources of a class
with a single table lookup, but as the corresponding rows will be consecutive in
To|ps we can use an efficient range scan starting at the first entry of the class.</p>
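      <p>A minimal sketch of such a prefix range scan over sorted row keys (hypothetical helper; real HBase scans operate on byte-array start and stop rows):

```python
import bisect

def class_members(sorted_keys, cls):
    """All members of an RDF class in the object-indexed table, where rdf:type
    triples use the compound row key 'class|subject'."""
    prefix = cls + "|"
    # Range scan: jump to the first entry of the class in the sorted row keys.
    start = bisect.bisect_left(sorted_keys, prefix)
    members = []
    for key in sorted_keys[start:]:
        if not key.startswith(prefix):
            break   # rows are sorted, so the class range is contiguous
        members.append(key.split("|", 1)[1])
    return members

keys = sorted(["foaf:Person|Alex", "foaf:Person|Martin", "foaf:Document|Article1"])
```

      Because class members occupy a contiguous key range, the scan touches only the rows of the requested class and never loads unrelated rows.</p>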
    </sec>
    <sec id="sec-4">
      <title>MAPSIN Join</title>
      <p>
        The major task in SPARQL query evaluation is the computation of joins
between triple patterns, i.e. basic graph patterns. However, join processing on large
RDF datasets, especially if it involves more than two triple patterns, is
challenging [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. Our approach combines the scalable storage capabilities of NoSQL data
stores (i.e. HBase), which lack a suitable distributed processing layer, with
MapReduce, a highly scalable and distributed computation framework, which
in turn does not support appropriate storage structures for large scale join
processing. This allows us to retain the flexibility of reduce-side joins while
utilizing the effectiveness of a map-side join without any changes to the
underlying frameworks.
      </p>
      <p>
        First, we introduce the needed SPARQL terminology analogous to [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]: Let
V be the infinite set of query variables and T be the set of valid RDF terms.
Definition 1. A (solution) mapping µ is a partial function µ : V → T. We
call µ(?v) the variable binding of µ for ?v. Abusing notation, for a triple pattern
p we call µ(p) the triple pattern that is obtained by substituting the variables
in p according to µ. The domain of µ, dom(µ), is the subset of V where µ is
defined and the domain of p, dom(p), is the subset of V used in p. The result of
a SPARQL query is a multiset of solution mappings Ω.
      </p>
      <p>Definition 2. Two mappings µ1, µ2 are compatible if, for every variable ?v ∈
dom(µ1) ∩ dom(µ2), it holds that µ1(?v) = µ2(?v). It follows that mappings with
disjoint domains are always compatible and the set-union (merge) of µ1 and µ2,
µ1 ∪ µ2, is also a mapping.
</p>
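      <p>Definitions 1 and 2 translate directly into code if solution mappings are modeled as dictionaries from variables to RDF terms (a sketch for illustration):

```python
def compatible(mu1, mu2):
    """Definition 2: mu1 and mu2 are compatible iff they agree on every
    variable in the intersection of their domains."""
    shared = set(mu1).intersection(mu2)
    return all(mu1[v] == mu2[v] for v in shared)

def merge(mu1, mu2):
    """The set-union of two compatible mappings is again a mapping."""
    merged = dict(mu1)
    merged.update(mu2)
    return merged

mu1 = {"?article": "Article1", "?title": "PigSPARQL"}
mu2 = {"?article": "Article1", "?author": "Alex"}
```

      Mappings with disjoint domains share no variables, so the all(...) check succeeds vacuously and they are always compatible.</p>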
      <sec id="sec-4-1">
        <title>Base Case</title>
        <p>To compute the join between two triple patterns, p1 ⋈ p2, we have to merge
the compatible mappings for p1 and p2. Therefore, it is necessary that subsets
of both multisets of mappings are brought together such that all compatible
mappings can be processed on the same machine.</p>
        <p>Our MAPSIN join technique computes the join between p1 and p2 in a
single map phase. At the beginning, the map phase is initialized with a parallel
distributed HBase table scan for the first triple pattern p1 where each machine
retrieves only those mappings that are locally available. This is achieved by
utilizing a mechanism for allocating local records to map functions, which is
supported by the MapReduce input format for HBase. The map function is
invoked for each retrieved mapping µ1 for p1. To compute the partial join between
p1 and p2 for the given mapping µ1, the map function needs to retrieve those
mappings for p2 that are compatible to µ1 based on the shared variables
between p1 and p2. At this point, the map function utilizes the input mapping µ1
to substitute the shared variables in p2, i.e. the join variables. The substituted
triple pattern psub is then used to retrieve the compatible mappings with a table
lookup in HBase following the triple pattern mapping outlined in Table 2. Since
there is no guarantee that the corresponding HBase entries reside on the same
machine, the results of the request have to be transferred over the network in
general. However, in contrast to a reduce-side join approach where a lot of data
is transferred over the network, we only transfer the data that is really needed.
Finally, the computed multiset of mappings is stored in HDFS.</p>
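        <p>A single invocation of this base-case map function can be sketched as follows, with a hypothetical toy_get standing in for the HBase table lookup of Table 2 (illustrative Python, not the actual implementation):

```python
def mapsin_map(mu1, p2, store_get):
    """One map invocation: substitute the join variables of p2 with the
    bindings of the input mapping mu1, look the substituted pattern up,
    and merge each compatible result mapping with mu1."""
    s, p, o = p2
    # Substitute the shared (join) variables in p2 with their bindings in mu1.
    p_sub = (mu1.get(s, s), p, mu1.get(o, o))
    out = []
    for mu2 in store_get(p_sub):          # stands in for the HBase GET request
        merged = dict(mu1)
        merged.update(mu2)                # merge of compatible mappings
        out.append(merged)
    return out

TRIPLES = [
    ("Article1", "author", "Alex"), ("Article1", "author", "Martin"),
    ("Article2", "author", "Martin"),
]

def toy_get(pattern):
    """Hypothetical index lookup: one mapping per matching triple."""
    s, p, o = pattern
    out = []
    for ts, tp, to in TRIPLES:
        if p == tp and (s.startswith("?") or s == ts) and (o.startswith("?") or o == to):
            mu = {}
            if s.startswith("?"):
                mu[s] = ts
            if o.startswith("?"):
                mu[o] = to
            out.append(mu)
    return out

result = mapsin_map({"?article": "Article1", "?title": "PigSPARQL"},
                    ("?article", "author", "?author"), toy_get)
```

        Only the retrieved compatible mappings cross the network; the input mappings themselves are read locally by the scan.</p>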
        <p>[Figure 2: Example of a MAPSIN join computation. (1) A distributed table scan for the first triple pattern (?article, title, ?title) provides (2) the local map inputs ?article=Article1, ?title="PigSPARQL" and ?article=Article2, ?title="RDFPath". (3) For each input mapping, a GET request retrieves the compatible bindings for the substituted pattern, e.g. (Article1, author, ?author). (4) The map outputs are the merged mappings, e.g. ?article=Article1, ?title="PigSPARQL", ?author=Alex and ?article=Article2, ?title="RDFPath", ?author=Martin.]</p>
        <p>
Chains of concatenated triple patterns require some slight modifications to the
previously described base case. To compute a query of at least three triple
patterns we have to process several joins successively, e.g. p1 ⋈ p2 ⋈ p3. The
processing of the first two patterns p1 ⋈ p2 corresponds to the base case and the
results are stored in HDFS. The additional triple pattern p3 is then joined with
the mappings for p1 ⋈ p2. To this end, an additional map-phase (without any
intermediate shuffle or reduce phase) is initialized with the previously computed
mappings as input. Since these mappings reside in HDFS, they are retrieved
locally in parallel such that the map function gets invoked for each mapping
µ2 for p1 ⋈ p2. The compatible mappings for p3 are retrieved using the same
strategy as for the base case, i.e. µ2 is used to substitute the shared variables in
p3 and compatible mappings are retrieved following the triple pattern mapping
outlined in Table 2. Algorithm 1 outlines one iteration of the MAPSIN join. The
input for the map function contains either a mapping for the first triple pattern
(via distributed table scan) or a mapping for previously joined triple patterns
(loaded from HDFS).</p>
        <p>Algorithm 1: MAPSIN join: map(inKey, inValue)</p>
        <p>results ← HBase.GET(pn+1) // table index lookup using unsubstituted pattern
</p>
      </sec>
      <sec id="sec-4-2">
        <title>Multiway Join Optimization</title>
        <p>Instead of processing concatenated triple patterns successively as a sequence
of two-way joins, some basic graph patterns allow to apply a multiway join
approach to process joins between several concatenated triple patterns at once
in a single map phase. This is typically the case for star pattern queries where
triple patterns share the same join variable. The SPARQL query introduced in
Section 2.1 is an example for such a query as all triple patterns share the same
join variable ?article. This query can be processed by a three-way join in a single
map-phase instead of two consecutive two-way joins.</p>
        <p>We extended our approach to support this multiway join optimization. Again,
the first triple pattern p1 is processed using a distributed table scan as input for
the map phase. But instead of using a sequence of n map phases to compute
p1 ⋈ p2 ⋈ ... ⋈ pn+1, we use a single map phase, thus saving n−1 MapReduce iterations.
Hence, the map function needs to retrieve all mappings for p2, p3, ..., pn+1 that
are compatible to the input mapping µ1 for p1. Therefore, the join variable ?vs
in p2, p3, ..., pn+1 (e.g. ?article) is substituted with the corresponding variable
binding µ1(?vs). The substituted triple patterns p2sub, p3sub, ..., pn+1sub are then used
to retrieve the compatible mappings using HBase table lookups. This general case
of the MAPSIN multiway join is outlined in Algorithm 2.</p>
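        <p>The multiway variant can be sketched in the same toy setting (hypothetical helpers; the essential point is that all lookups happen inside one map invocation, so no intermediate MapReduce iteration is required):

```python
def mapsin_multiway_map(mu1, patterns, store_get):
    """Multiway MAPSIN sketch: join the input mapping mu1 with the
    compatible mappings of p2..pn+1 within a single map invocation."""
    results = [mu1]
    for s, p, o in patterns:
        joined = []
        for mu in results:
            # Substitute the join variable with its binding in the current mapping.
            p_sub = (mu.get(s, s), p, mu.get(o, o))
            for mu2 in store_get(p_sub):
                merged = dict(mu)
                merged.update(mu2)
                joined.append(merged)
        results = joined
    return results

TRIPLES = [("Article1", "title", "PigSPARQL"), ("Article1", "year", "2011")]

def toy_get(pattern):
    """Hypothetical index lookup returning one mapping per matching triple."""
    s, p, o = pattern
    out = []
    for ts, tp, to in TRIPLES:
        if p == tp and s == ts and (o.startswith("?") or o == to):
            out.append({o: to} if o.startswith("?") else {})
    return out

star = [("?article", "title", "?title"), ("?article", "year", "?year")]
result = mapsin_multiway_map({"?article": "Article1"}, star, toy_get)
```

        In the optimized case described next, the per-pattern lookups collapse further into a single row retrieval when all patterns join on the subject or all join on the object.</p>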
        <p>
          The performance of MAPSIN joins strongly correlates with the number of
index lookups in HBase. Hence, minimizing the number of lookups is a crucial
point for optimization. In many situations, it is possible to reduce the number of
requests by leveraging the RDF schema design for HBase outlined in Section 3.
If the join variable for all triple patterns is always on subject or always on object
position, then all mappings for p2, p3, ..., pn+1 that are compatible to the input
mapping µ1 for p1 are stored in the same HBase table row of Ts|po or To|ps,
respectively, making it possible to use a single instead of n subsequent table
lookups. Hence, all compatible mappings can be retrieved at once thus saving
n − 1 lookups for each invocation of the map function. Due to space limitations
the corresponding algorithm for this optimized case can be found in the technical
report version of this paper [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Evaluation</title>
      <p>
        The evaluation was performed on a cluster of 10 Dell PowerEdge R200 servers
equipped with a Dual Core 3.16 GHz CPU, 8 GB RAM, 3 TB disk space and
connected via gigabit network. The software installation includes Hadoop 0.20.2,
HBase 0.90.4 and Java 1.6.0 update 26.
We used the well-known Lehigh University Benchmark (LUBM) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] as the
queries can easily be formulated as SPARQL basic graph patterns. Furthermore,
we also considered the SPARQL-specific SP2Bench Performance Benchmark [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ].
However, because most of the SP2Bench queries are rather complex queries that
use all different kinds of SPARQL 1.0 operators, we only evaluated some of the
queries as the focus of our work is the efficient computation of joins, i.e. basic
graph patterns. Both benchmarks offer synthetic data generators that can be
used to generate arbitrary large datasets. For SP2Bench we generated datasets
from 200 million up to 1000 million triples. For LUBM we generated datasets
from 1000 up to 3000 universities and used the WebPIE inference engine for
Hadoop [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] to pre-compute the transitive closure. The loading times for both
tables Ts|po and To|ps as well as all datasets are listed in Table 3.
      </p>
      <p>
        The goal of our approach was to optimize MapReduce based join
computation for selective queries. Therefore, we compared our MAPSIN join approach
with the reduce-side join based query execution in PigSPARQL [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], a SPARQL
1.0 engine built on top of Pig. Pig is an Apache top-level project developed
by Yahoo! Research that offers a high-level language for the analysis of very
large datasets with Hadoop MapReduce. The crucial point for this choice was
the sophisticated and efficient reduce-side join implementation of Pig [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] that
incorporates sampling and hash join techniques which makes it a challenging
candidate for comparison. We illustrate the performance comparison of PigSPARQL
and MAPSIN for some selected LUBM queries that represent the different query
types in Figure 3. Our proof-of-concept implementation is currently limited to
a maximum number of two join variables as the goal was to demonstrate the
feasibility of the approach for selective queries rather than supporting all
possible BGP constellations. For detailed comparison, the runtimes of all executed
queries are listed in Table 4.
      </p>
      <p>LUBM queries Q1, Q3, Q5, Q11, Q13 as well as SP2Bench query Q3a
demonstrate the base case with a single join between two triple patterns (cf. Figure 3a).
For the LUBM queries, MAPSIN joins performed 8 to 13 times faster compared
to the reduce-side joins of PigSPARQL. Even for the less selective SP2Bench
query, our MAPSIN join required only one third of the PigSPARQL execution
time. Furthermore, the performance gain increases with the size of the dataset
for both LUBM and SP2Bench.
step and illustrate primarily the advantages of the distributed HBase table scan
compared to the HDFS storage access of PigSPARQL. Improvements are still
present but less significant, resulting in an up to 5 times faster query execution.</p>
      <p>An open issue of the evaluation remains the actual data flow between HBase
and MapReduce as HBase is like a black box where data distribution and
partitioning is handled by the system automatically. Since data locality is an
important aspect of distributed systems, it is crucial to examine additional measures
for future optimizations.</p>
      <p>Overall, the MAPSIN join approach clearly outperforms the reduce-side join
based query execution for selective queries. Both approaches reveal a linear
scaling behavior with the input size but the slope of the MAPSIN join is much
smaller. Especially for LUBM queries, MAPSIN joins outperform reduce-side
joins by an order of magnitude as these queries are generally rather selective.
Moreover, the application of the multiway join optimization results in a further
significant improvement of the total query execution times.
</p>
    </sec>
    <sec id="sec-6">
      <title>Related Work</title>
      <p>
        Single machine RDF systems like Sesame [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and Jena [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ] are widely-used
since they are user-friendly and perform well for small and medium sized RDF
datasets. RDF-3X [
        <xref ref-type="bibr" rid="ref21">21</xref>
         ] is considered one of the fastest single machine RDF
systems in terms of query performance, vastly outperforming previous single
machine systems, but its performance degrades for queries with unbound objects and
low selectivity factor [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Furthermore, as the amount of RDF data continues to
grow, it will become more and more difficult to store entire datasets on a single
machine due to the limited scaling capabilities [
        <xref ref-type="bibr" rid="ref16">16</xref>
         ]. One possible approach is the use of
specialized clustered RDF systems like OWLIM [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], YARS2 [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] or 4store [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
However, these systems require a dedicated infrastructure and pose additional
installation and management overhead. In contrast, our approach builds upon
the idea of using existing infrastructures that are well-known and widely used. As
we do not require any changes to Hadoop and HBase at all, it is possible to use
any existing Hadoop cluster or cloud service (e.g. Amazon EC2) out of the box.
      </p>
      <p>
        There is a large body of work dealing with join processing in MapReduce
considering various aspects and application fields [
        <xref ref-type="bibr" rid="ref18 ref20 ref22 ref25 ref32 ref4 ref5">4,5,18,20,22,25,32</xref>
        ]. In
Section 2.2 we briefly outlined the advantages and drawbacks of the general-purpose
reduce-side and map-side (merge) join approaches in MapReduce. In addition to
these general-purpose approaches there are several proposals focusing on certain
join types or optimizations of existing join techniques for particular application
fields. In [
        <xref ref-type="bibr" rid="ref22">22</xref>
         ] the authors discuss how to process arbitrary joins (theta joins)
using MapReduce, whereas [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] focuses on optimizing multiway joins. However,
in contrast to our MAPSIN join, both approaches process the join in the reduce
phase including a costly data shuffle phase. Map-Reduce-Merge [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ] describes a
modified MapReduce workflow by adding a merge phase after the reduce phase,
whereas Map-Join-Reduce [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] proposes a join phase in between the map and
reduce phase. Both techniques attempt to improve the support for joins in
MapReduce but require profound modifications to the MapReduce framework. In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
the authors present non-invasive index and join techniques for SQL processing
in MapReduce that also reduce the amount of shuffled data at the cost of an
additional co-partitioning and indexing phase at load time. However, the schema
and workload are assumed to be known in advance, which is typically feasible for
relational data but does not hold for RDF in general.
      </p>
      <p>
        HadoopDB [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] is a hybrid of MapReduce and DBMS where MapReduce is
the communication layer above multiple single node DBMS. The authors in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]
adopt this hybrid approach for the semantic web using RDF-3X. However, the
initial graph partitioning is done on a single machine and has to be repeated if
the dataset is updated or the number of machines in the cluster changes. As we
use HBase as the underlying storage layer, additional machines can be plugged in
seamlessly and updates are possible without reloading the entire dataset.
      </p>
      <p>
        HadoopRDF [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] is a MapReduce based RDF system that stores data directly
in HDFS and also does not require any changes to the Hadoop framework. It
is able to rebalance automatically when the cluster size changes, but join processing
is still done in the reduce phase. Our MAPSIN join does not use any shuffle or
reduce phase at all, even in consecutive iterations.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Conclusion</title>
      <p>In this paper we introduced the Map-Side Index Nested Loop join (MAPSIN
join), which combines the advantages of NoSQL data stores like HBase with the
well-known and proven distributed processing facilities of MapReduce. In
general, map-side joins are more efficient than reduce-side joins in MapReduce as
there is no expensive data shuffle phase involved. However, current map-side
join approaches suffer from strict preconditions, which makes them hard to
apply in general, especially in a sequence of joins. The combination of HBase and
MapReduce allows us to cascade a sequence of MAPSIN joins without having to
sort and repartition the intermediate output for the next iteration. Furthermore,
with the multiway join optimization we can reduce the number of MapReduce
iterations and HBase requests. Using an index to selectively request only the
data that is actually needed also saves network bandwidth, making parallel query
execution more efficient. The evaluation with the LUBM and SP2Bench
benchmarks demonstrates the advantages of our approach compared to the commonly
used reduce-side join approach in MapReduce. For selective queries, MAPSIN
join based SPARQL query execution outperforms reduce-side join based
execution by an order of magnitude while scaling very smoothly with the input size.
Lastly, our approach does not require any changes to Hadoop and HBase at all.
Consequently, MAPSIN joins can be run on any existing Hadoop infrastructure
and also on an instance of Amazon’s Elastic Compute Cloud (EC2) without
additional installation or management overhead.</p>
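      <p>As a minimal illustration of the idea, assuming Python and an in-memory
dictionary standing in for an HBase table lookup, a single MAPSIN join step can be
sketched as follows; all names are illustrative and not part of the actual
implementation:</p>

```python
# Illustrative sketch of a Map-Side Index Nested Loop (MAPSIN) join step.
# The `index` dict stands in for an HBase table keyed by subject; a real
# implementation would issue HBase Get requests inside the map function.

def mapsin_join(partition, index, var):
    """For each binding of `var` in the local partition, probe the index
    for matching triples. Runs entirely map-side: no shuffle, no reduce."""
    results = []
    for binding in partition:
        key = binding[var]
        for obj in index.get(key, []):  # index lookup replaces the shuffle
            merged = dict(binding)
            merged["match"] = obj
            results.append(merged)
    return results

# Bindings for ?x from a first triple pattern, e.g. (?x rdf:type :Student)
partition = [{"x": "student1"}, {"x": "student2"}]

# Simulated index over a second pattern, e.g. (?x :takesCourse ?y),
# keyed by subject
index = {"student1": ["course1", "course2"], "student2": ["course3"]}

joined = mapsin_join(partition, index, "x")
# Cascading: the map output can directly serve as the input partition of
# the next MAPSIN iteration without sorting or repartitioning.
```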
      <p>In our future work, we will investigate alternatives and improvements of the
RDF storage schema for HBase and incorporate MAPSIN joins into PigSPARQL
in a hybrid fashion such that the actual join method is dynamically selected based
on pattern selectivity and statistics gathered at data loading time.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <article-title>RDF Primer</article-title>
          .
          <source>W3C Recommendation</source>
          (
          <year>2004</year>
          ), http://www.w3.org/TR/rdf-primer/
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <article-title>SPARQL Query Language for RDF</article-title>
          .
          <source>W3C Recommendation</source>
          (
          <year>2008</year>
          ), http://www.w3.org/TR/rdf-sparql-query/
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Abouzeid</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bajda-Pawlikowski</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abadi</surname>
            ,
            <given-names>D.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rasin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Silberschatz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>HadoopDB: An Architectural Hybrid of MapReduce and DBMS Technologies for Analytical Workloads</article-title>
          .
          <source>PVLDB</source>
          <volume>2</volume>
          (
          <issue>1</issue>
          ),
          <fpage>922</fpage>
          -
          <lpage>933</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Afrati</surname>
            ,
            <given-names>F.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ullman</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          :
          <article-title>Optimizing Multiway Joins in a Map-Reduce Environment</article-title>
          .
          <source>IEEE Trans. Knowl. Data Eng</source>
          .
          <volume>23</volume>
          (
          <issue>9</issue>
          ),
          <fpage>1282</fpage>
          -
          <lpage>1298</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Blanas</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patel</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ercegovac</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shekita</surname>
            ,
            <given-names>E.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tian</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>A Comparison of Join Algorithms for Log Processing in MapReduce</article-title>
          . In: SIGMOD (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Broekstra</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kampman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van Harmelen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Sesame: A Generic Architecture for Storing and Querying RDF and RDF Schema</article-title>
          . In: ISWC (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Chang</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dean</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghemawat</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hsieh</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wallach</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burrows</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chandra</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fikes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gruber</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Bigtable: A Distributed Storage System for Structured Data</article-title>
          .
          <source>ACM Transactions on Computer Systems (TOCS) 26(2)</source>
          ,
          <volume>4</volume>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Dean</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghemawat</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <source>MapReduce: Simplified Data Processing on Large Clusters. Communications of the ACM</source>
          <volume>51</volume>
          (
          <issue>1</issue>
          ),
          <fpage>107</fpage>
          -
          <lpage>113</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Dittrich</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Quiané-Ruiz</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jindal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kargin</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Setty</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schad</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          : Hadoop++:
          <article-title>Making a Yellow Elephant Run Like a Cheetah (Without It Even Noticing)</article-title>
          .
          <source>PVLDB 3</source>
          (
          <issue>1</issue>
          ),
          <fpage>518</fpage>
          -
          <lpage>529</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Franke</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chebotko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abraham</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brazier</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Distributed Semantic Web Data Management in HBase and MySQL Cluster</article-title>
          .
          <source>In: IEEE International Conference on Cloud Computing (CLOUD)</source>
          . pp.
          <fpage>105</fpage>
          -
          <lpage>112</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Gates</surname>
            ,
            <given-names>A.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Natkovich</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chopra</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kamath</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Narayanamurthy</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Olston</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reed</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Srinivasan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Srivastava</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Building a High-Level Dataflow System on top of Map-Reduce: The Pig Experience</article-title>
          .
          <source>PVLDB</source>
          <volume>2</volume>
          (
          <issue>2</issue>
          ) (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Ghemawat</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gobioff</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Leung</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>The Google File System</article-title>
          .
          <source>In: ACM SIGOPS Operating Systems Review</source>
          . vol.
          <volume>37</volume>
          , pp.
          <fpage>29</fpage>
          -
          <lpage>43</lpage>
          . ACM (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Guo</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heflin</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>LUBM: A Benchmark for OWL Knowledge Base Systems</article-title>
          .
          <source>Web Semantics</source>
          <volume>3</volume>
          (
          <issue>2</issue>
          ) (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Harris</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamb</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shadbolt</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>4store: The Design and Implementation of a Clustered RDF Store</article-title>
          . In: SSWS. pp.
          <fpage>94</fpage>
          -
          <lpage>109</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Harth</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Umbrich</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hogan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Decker</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>YARS2: A Federated Repository for Querying Graph Structured Data from the Web</article-title>
          .
          <source>The Semantic Web</source>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abadi</surname>
            ,
            <given-names>D.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ren</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <source>Scalable SPARQL Querying of Large RDF Graphs. PVLDB</source>
          <volume>4</volume>
          (
          <issue>11</issue>
          ),
          <fpage>1123</fpage>
          -
          <lpage>1134</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Husain</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McGlothlin</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Masud</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khan</surname>
            ,
            <given-names>L.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thuraisingham</surname>
            ,
            <given-names>B.M.</given-names>
          </string-name>
          :
          <article-title>Heuristics-Based Query Processing for Large RDF Graphs Using Cloud Computing</article-title>
          .
          <source>IEEE TKDE 23(9)</source>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Jiang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tung</surname>
            ,
            <given-names>A.K.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Map-Join-Reduce: Toward Scalable and Efficient Data Analysis on Large Clusters</article-title>
          .
          <source>IEEE TKDE 23(9)</source>
          ,
          <fpage>1299</fpage>
          -
          <lpage>1311</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Kiryakov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ognyanov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>OWLIM - A Pragmatic Semantic Repository for OWL</article-title>
          .
          <source>In: WISE Workshops</source>
          . pp.
          <fpage>182</fpage>
          -
          <lpage>192</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>K.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>Y.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Choi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chung</surname>
            ,
            <given-names>Y.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moon</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Parallel Data Processing with MapReduce: A Survey</article-title>
          .
          <source>SIGMOD Record</source>
          <volume>40</volume>
          (
          <issue>4</issue>
          ),
          <fpage>11</fpage>
          -
          <lpage>20</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Neumann</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weikum</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>RDF-3X: a RISC-style engine for RDF</article-title>
          .
          <source>PVLDB</source>
          <volume>1</volume>
          (
          <issue>1</issue>
          ),
          <fpage>647</fpage>
          -
          <lpage>659</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Okcan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Riedewald</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Processing Theta-Joins using MapReduce</article-title>
          .
          <source>In: SIGMOD Conference</source>
          . pp.
          <fpage>949</fpage>
          -
          <lpage>960</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23. Pérez, J.,
          <string-name>
            <surname>Arenas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gutierrez</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Semantics and Complexity of SPARQL</article-title>
          .
          <source>ACM Transactions on Database Systems (TODS) 34(3)</source>
          ,
          <volume>16</volume>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Przyjaciel-Zablocki</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Schätzle,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Hornung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Dorner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Lausen</surname>
          </string-name>
          , G.:
          <article-title>Cascading Map-Side Joins over HBase for Scalable Join Processing</article-title>
          .
          <source>Technical Report. CoRR abs/1206</source>
          .6293 (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Przyjaciel-Zablocki</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Schätzle,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Hornung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Lausen</surname>
          </string-name>
          , G.:
          <article-title>RDFPath: Path Query Processing on Large RDF Graphs with MapReduce</article-title>
          .
          <source>In: ESWC Workshops</source>
          . pp.
          <fpage>50</fpage>
          -
          <lpage>64</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26. Schätzle,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Przyjaciel-Zablocki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Lausen</surname>
          </string-name>
          , G.:
          <article-title>PigSPARQL: Mapping SPARQL to Pig Latin</article-title>
          .
          <source>In: Proceedings of the International Workshop on Semantic Web Information Management (SWIM)</source>
          . pp.
          <fpage>4:1</fpage>
          -
          <lpage>4:8</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hornung</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lausen</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pinkel</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>SP2Bench: A SPARQL Performance Benchmark</article-title>
          .
          <source>In: ICDE</source>
          . pp.
          <fpage>222</fpage>
          -
          <lpage>233</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jin</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          :
          <article-title>Scalable RDF Store Based on HBase and MapReduce</article-title>
          .
          <source>In: ICACTE</source>
          . vol.
          <volume>1</volume>
          , pp.
          <fpage>633</fpage>
          -
          <lpage>636</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Urbani</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kotoulas</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maassen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van Harmelen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bal</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>OWL Reasoning with WebPIE: Calculating the Closure of 100 Billion Triples</article-title>
          .
          <source>In: ESWC</source>
          . pp.
          <fpage>213</fpage>
          -
          <lpage>227</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Weiss</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karras</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bernstein</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Hexastore: Sextuple Indexing for Semantic Web Data Management</article-title>
          .
          <source>PVLDB</source>
          <volume>1</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1008</fpage>
          -
          <lpage>1019</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Wilkinson</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sayers</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuno</surname>
            ,
            <given-names>H.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reynolds</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Efficient RDF Storage and Retrieval in Jena2</article-title>
          .
          <source>In: SWDB</source>
          . pp.
          <fpage>131</fpage>
          -
          <lpage>150</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>H.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dasdan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hsiao</surname>
            ,
            <given-names>R.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parker Jr.</surname>
            ,
            <given-names>D.S.</given-names>
          </string-name>
          :
          <article-title>Map-Reduce-Merge: Simplified Relational Data Processing on Large Clusters</article-title>
          .
          <source>In: SIGMOD</source>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>