<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Scalable Semantic Version Control for Linked Data Management</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Claudius Hauptmann</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Brocco</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Wolfgang Worndl</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Technische Universitat Munchen</institution>
          ,
          <addr-line>85748 Garching</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Linked Data is the Semantic Web's established standard for publishing and interlinking data. When authors collaborate on a data set, distribution and tracking of changes are crucial aspects. Several approaches for version control have been presented in this area, each focusing on different aspects and bound to different limitations. In this paper we present an approach for semantic version control for dynamic Linked Data based on a delta-based storage strategy and on-demand reconstruction of historic versions. The approach is intended for large data sets and supports targeted and cross-version queries. The approach was implemented prototypically on top of a scalable triple store and tested with a generated data set based on parts of DBpedia with several million triples and thousands of versions.</p>
      </abstract>
      <kwd-group>
        <kwd>Linked Data</kwd>
        <kwd>Version Control</kwd>
        <kwd>SPARQL</kwd>
        <kwd>Revision</kwd>
        <kwd>Query</kwd>
        <kwd>Provenance</kwd>
        <kwd>Named Graphs</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Linked Data provides essential mechanisms to efficiently interlink and integrate
data using the Resource Description Framework (RDF) as base model [
        <xref ref-type="bibr" rid="ref5 ref9">5, 9</xref>
        ]. RDF
stores information as a directed graph. Edges are defined by triples consisting
of subject, predicate and object; nodes are defined implicitly through the
edges and are referenced by URIs. Edges can be grouped into named graphs to
facilitate administration or to store additional information by assigning a
context, which turns triples into quads. SPARQL (SPARQL Protocol And
RDF Query Language) can be used as query language for Linked Data, using
pattern matching, filtering, aggregation and even distributed query execution to
query several data sources at once.
      </p>
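      <p>For illustration, a minimal SPARQL query that matches all triples of one named graph could look as follows (the graph URI is purely illustrative):</p>
      <preformat>
# Match every edge (triple) stored in one named graph
SELECT ?s ?p ?o
WHERE {
  GRAPH &lt;http://example.org/graph1&gt; {
    ?s ?p ?o .
  }
}
      </preformat>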
      <p>
        A feature not yet covered by the Linked Data standard is version
control. Especially when several authors are involved (which is obviously the
case for the data amounts addressed by Linked Data), tracking and distribution
of changes and rolling back to previous revisions are crucial aspects for any kind
of data management [
        <xref ref-type="bibr" rid="ref5 ref6 ref9">5, 9, 6</xref>
        ]. Recent research projects created several approaches
for version control of Linked Data, each with a focus on different aspects. They cover
versioning of OWL ontologies, lightweight RDFS ontologies and Linked
Data, support different workflows and enable knowledge workers to run different
query types on versioned data. Some are limited in scalability regarding the
number of triples or the number of versions, or because of space efficiency. Some
solutions hide the version information from the Linked Data layer, preventing
access to version information by SPARQL queries.
      </p>
      <p>
        Since Linked Data is designed to handle and publish large data sets, we focus
on scalable semantic version control. This comes with new limitations, especially
in the variety of query types that can be handled. Because of the amounts of data
we want to handle, we use a delta-based strategy. Triples of historic versions
that are used for query evaluation are reconstructed on-demand. This comes
at the cost that global queries (which require the whole dataset for a specific
version to be constructed) can hardly be supported. The construction of versions
occurs very frequently, thus the version construction performance is the critical
factor [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        The goal of this paper is to propose an approach for scalable semantic version
control for dynamic Linked Data based on partial on-demand reconstruction of
historic versions. We focus on query optimization for targeted queries on random
versions stored in a delta-based, distributed storage. Like Graube et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] we
want the version control information itself to be accessible by SPARQL queries.
      </p>
      <p>We prototypically implemented a semantic version control system on top of
a scalable triple store and analyzed query execution plans and their performance
using several million triples. We optimized queries accessing historical
information and added a module to the query engine that constructs and caches relevant
triples on-demand. The implemented approach was evaluated in terms of space
efficiency and query performance.</p>
      <p>Section 2 of this paper presents related work, Section 3 describes our approach
and Section 4 a performance test. Section 5 closes with a summary and an
outlook on future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        Stefanidis et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] discuss storage strategies and recommend hybrid
strategies over pure version-based or delta-based strategies. They distinguish between
modern, historical and cross-version queries, global vs. targeted queries and
version-centered vs. delta-centered queries. Graube et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] show an approach
for semantic version control targeting medium-sized data sets, supporting both
version-specific queries and cross-version queries. They use a delta-based store and a
version-based store holding the latest version. By applying changesets, versions
are temporarily adapted on the fly to the version specified by the query. The
approach is limited by the number of changes that can be applied to an index on
the fly. Tzitzikas et al. [
        <xref ref-type="bibr" rid="ref12 ref8">12, 8</xref>
        ] develop storage index structures based on partial
orders that store several overlapping versions of RDF datasets. Im, Lee and Kim
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] argue that this approach is not scalable if a query needs to fully construct
a specific version. They use a relational database to store deltas separately in
a delete and an insert table. They construct logical versions on the fly using a
SQL statement that joins the original version and the relevant delta tables. To
reduce overhead they introduce aggregated deltas. Im et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] use a hypergraph
that exploits hyperedges and vertices for RDF version management. In contrast
to Tzitzikas et al. [
        <xref ref-type="bibr" rid="ref12 ref8">12, 8</xref>
        ] versions do not have to be constructed per query, since
the hypergraph allows storing the relations between edges and versions. The
approach is limited in scalability, since a combination of several million edges
and several thousand versions is demanding in terms of space, and the addition
of a version will also be time-consuming. Auer and Herre [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] focus on ontology
evolution by tracking and classifying changes made to RDF stores. Cassidy and
Ballantine [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] use a delta-based storage and a working storage backed by
relational databases on both server and client that synchronize changes. Sande et
al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] store deltas in a quad-store. Versions can be queried by SPARQL through
virtual graphs. An interpretation layer rewrites SPARQL queries so that the
versions stored in the quad store are reconstructed on the fly as a triple store. SemVersion
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] is an RDF-based ontology versioning system supporting historic queries. The
information about versions is hidden from SPARQL queries, and the space
overhead limits scalability. Berners-Lee and Connolly [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] focus on comparing and updating
RDF graphs by generating sets of differences and propose an update ontology
for patches of RDF files. We are looking for a solution for large data sets that
recreates historic versions on-demand and therefore accept limitations regarding
global queries [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which require the whole data set for a version to be
constructed. Our focus differs from previous work and needs its own
optimizations, which are discussed and tested in the following sections.
      </p>
    </sec>
    <sec id="sec-3">
      <title>On-demand Version Reconstruction</title>
      <p>
        Like Cassidy and Ballantine [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and Graube et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] we handle versioning
on the level of complete graphs, and we model the delta-sets as Linked Data.
Each version of a graph is referenced by a commit and, except for the first one,
each commit references (ref) a previous commit (prev). Since several commits
can reference the same previous commit, it is possible to work on branches in
parallel. Merging two branches is done by creating a commit that references
two previous commits. All edges are defined by triples, and as these triples belong
to the same graph, we can store them as quads and use the context information
as identifier for the triple. The commits either add or remove triples, stored as
edges that reference these triples with a predicate of type add or delete. Merge
commits store only those triple modifications that are ambiguous and would lead
to conflicts. We store branches and tags as edges that link a branch URI to its
current commit. The branches and tags are referenced by an edge connecting
them to a graph. Figure 1 shows an example instance of this model.
      </p>
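      <p>A small sketch of this model in TriG may make it concrete; the vocabulary and URIs are illustrative only, not the paper's actual schema:</p>
      <preformat>
@prefix v: &lt;http://example.org/versioning#&gt; .

# Two commits on one branch; commit 3 deletes the triple commit 2 added.
&lt;urn:commit:2&gt; v:prev &lt;urn:commit:1&gt; ; v:add &lt;urn:triple:42&gt; .
&lt;urn:commit:3&gt; v:prev &lt;urn:commit:2&gt; ; v:delete &lt;urn:triple:42&gt; .

# A branch links a branch URI to its current commit.
&lt;http://graph1/branches/branch1&gt; v:references &lt;urn:commit:3&gt; .

# The triple itself is stored as a quad whose context is its identifier.
&lt;urn:triple:42&gt; { &lt;urn:s&gt; &lt;urn:p&gt; &lt;urn:o&gt; . }
      </preformat>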
      <p>
        To access historic versions we use virtual graphs that specify the version we
want to use in our query. In the following targeted query [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] we use data in a
branch called branch1 to load triples of the graph http://graph1 by accessing
the virtual graph http://graph1/branches/branch1:
      </p>
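      <p>Such a targeted query could look as follows (a sketch using the virtual graph URI given above):</p>
      <preformat>
# Load all triples of graph1 as seen by branch1
SELECT * WHERE {
  GRAPH &lt;http://graph1/branches/branch1&gt; {
    ?s ?p ?o .
  }
}
      </preformat>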
      <p>As delta-based storages store only differences between versions, we cannot
run this query directly. One way to solve this issue would be to reconstruct the
data and to run the query afterwards. Another way is to adapt the query to
the actual data structure by rewriting the original query in order to use the
delta-based storage:</p>
      <p>
        As predicted by Sande et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], we measured that this query is very time
consuming because of the computational overhead for this type of query. First,
most triple stores reconstruct the whole desired version in a first step and
then run the original query, which in fact reuses only small parts of this very large
intermediate result. We solved this by giving query hints to the query engine.
Second, the arbitrary path length operators that traverse the commit graph (like
versioning:hasPrevious*) are executed very often, once per triple per possible
solution, to check whether the binding set is part of the version. Third, the
associations between triples and versions are not cached. Caching is especially useful
when joins are used in queries and a triple is matched against other triples several
times.
      </p>
      <p>
        Once per query we load the commit graph into an in-memory index, which
takes a few seconds for 28,000 commits. Instead of using arbitrary path length
operators, we create and use our own indexes to check whether the triples used in
possible results are in fact part of the specified version and the result should be
returned. Query engines of scalable triple stores work pipelined [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. While the
first iterator is performing an index scan, it sends first results to the following
iterator (e.g. a hash join). We work on chunks of up to 100 triple identifiers,
for which we load the associations to commits into an index. We traverse the
commit graph once per chunk and select and return the triple identifiers that
are part of the specified version.
      </p>
      <p>
        The proposed approach was prototypically implemented and is based on
Blazegraph (formerly Bigdata) [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], a scalable, distributed triple store which
implements the Sesame API [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. We have not yet automated the rewriting
process. To access historical data we implemented a custom service that is called
from queries via the SERVICE keyword as a virtual service. Since Blazegraph
already contains several virtual services that extend the functionality of SPARQL, this
seemed to be an appropriate way. The service receives the version identifier from the
original query as triple patterns inside the service query, and the triple identifiers
via bindings (bindings are comparable to query parameters in SQL). For each
triple pattern we check the version association and use a hash table to cache this
information. If the triple is part of the version, we return that triple identifier
as a binding to the original query that called the virtual service. With our
approach, the rewritten query from the last section changes into:
      </p>
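      <p>With the virtual service, the rewritten query might be sketched as follows; the service URI and the pattern inside the SERVICE block are illustrative assumptions:</p>
      <preformat>
PREFIX v: &lt;http://example.org/versioning#&gt;
SELECT ?s ?p ?o WHERE {
  # candidate triples from the delta store, identified by their context
  GRAPH ?id { ?s ?p ?o . }
  # the virtual service keeps only the identifiers that belong to the
  # requested version; the ?id values reach it in chunks as bindings
  SERVICE &lt;http://example.org/versioning/filter&gt; {
    ?id v:inVersion &lt;http://graph1/branches/branch1&gt; .
  }
}
      </preformat>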
    </sec>
    <sec id="sec-4">
      <title>Performance Tests</title>
      <p>
        Widely used metrics for evaluation of version control concepts are response time
and storage space consumption [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. We measured the response time for complete
queries as well as the response time for the single steps of our approach. We ran
each query 100 times and calculated average durations. First, we measured the
duration of the generation of the commit graph index, which is built once per
query. Second, we measured the duration of the generation of an index storing
the relationships between commits and triple identifiers. Third, we measured the
duration and the average path length (average number of edges traversed per
triple identifier) of the graph traversal that checks which triples are part of the
specified version. The triple store we used called our virtual service once per chunk of
100 triple identifiers. We measured the storage space consumption by cumulating
the file sizes of the index segments rather than counting triples.
As dataset we used the English mapping-based types of DBpedia (release 2014),
which contains 28,031,852 triples. We generated 4 delta-based datasets with
100, 1,000, 10,000 and 100,000 triples per commit. The changes were distributed
equally over the commits, and the history consisted of a single timeline with one
branch. As a baseline, we also ran the test queries on a triple store which contained
the latest version without any version control. As test queries we loaded the
list of types assigned to Slovenia and the list of instances of type country from
the latest commit, which contains all triples:
SELECT * WHERE { &lt;http://dbpedia.org/resource/Slovenia&gt; ?p ?o }
SELECT * WHERE { ?s ?p &lt;http://schema.org/Country&gt; }
The results show that we can run targeted queries on a delta-based storage
with on-demand creation of historic versions. The first test query uses an index
of the version-based storage of the baseline and is a hundred times faster than
on a delta-based storage. In the second test query a version-based storage would
be four times faster than a delta-based storage. The disk space consumption is
about four times higher than without version control. Our approach is limited
by the number of versions, which increases the time spent for graph traversal, and
by the size of the results or intermediate results. Global queries (like looking for
the average number of friends [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]) will be faster on a version-based approach.
If these query types are important, a version-based approach is preferable over
a delta-based approach. If the number of versions used for global queries is
limited, a hybrid approach could be used, or specific versions could be extracted
into additional triple stores. We did not analyze the behaviour of the approach
under a more realistic history containing several branches, merges and several
changes to the same value (e.g. same subject and predicate), nor with more
complex queries with several graph patterns.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion and Outlook</title>
      <p>In this paper we presented an approach for scalable semantic version control for
Linked Data based on a delta-based storage and on-demand reconstruction of
historic versions. Versions are handled on the graph level, and random versions of
graphs can be accessed transparently within SPARQL queries through virtual
graphs. We showed that targeted queries on random versions that reconstruct
the necessary parts of the datasets on-demand are possible. The optimization we
propose is based on the query execution order, on in-memory indexes and on
caching of intermediate results that are frequently used within a query.</p>
      <p>
        There are several aspects we have not analyzed yet. The index for a graph
of commits can be cached and reused for several queries to the same graph,
which would save a remarkable amount of time. In our implementation we did
not operate on the internal identifiers of the storage engine but on the URIs,
which have to be loaded from an additional index; this also leaves room for
optimization. Blazegraph also has an optimized storage option for statement
identifiers that might save space for the delta-based storage. The size of the
indexes that have to be traversed could also be reduced by a hybrid storage
strategy. To automate the creation of the materialized versions for a hybrid
approach, a cost model is necessary to decide which versions to create. This idea
is comparable to lazy materialization of indexes for relational databases [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
We did not implement or evaluate the creation of merge commits, which involves
checking for conflicts. So far we have focused on querying random versions of graphs in
delta-based storages, but did not propose an approach for SPARQL update
queries without a materialized version that has change listeners attached. This
is also an important feature that needs further research.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Auer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Herre</surname>
          </string-name>
          , H.:
          <article-title>A Versioning and Evolution Framework for RDF Knowledge Bases</article-title>
          . In: Virbitskaite,
          <string-name>
            <given-names>I.</given-names>
            ,
            <surname>Voronkov</surname>
          </string-name>
          ,
          <string-name>
            <surname>A</surname>
          </string-name>
          . (eds.) 6th Intl. Andrei Ershov Memorial Conference,
          <string-name>
            <surname>PSI</surname>
          </string-name>
          <year>2006</year>
          . pp.
          <volume>55</volume>
          –
          <fpage>69</fpage>
          . Springer Berlin Heidelberg, Novosibirsk (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Berners-Lee</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Connolly</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Delta: an ontology for the distribution of differences between RDF graphs</article-title>
          (
          <year>2001</year>
          ), http://www.w3.org/DesignIssues/Diff
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Broekstra</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kampman</surname>
          </string-name>
          , A.,
          <string-name>
            <surname>van Harmelen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Sesame: A Generic Architecture for Storing and Querying RDF and RDF Schema</article-title>
          .
          <source>In: Proceedings of the First International Semantic Web Conference</source>
          . Springer Berlin Heidelberg, Sardinia (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Cassidy</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ballantine</surname>
          </string-name>
          , J.:
          <article-title>Version Control for RDF Triple Stores</article-title>
          . In: Filipe,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Shishkov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            ,
            <surname>Helfert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Maciaszek</surname>
          </string-name>
          ,
          <string-name>
            <surname>L</surname>
          </string-name>
          . (eds.)
          <source>Proceedings of the 2nd International Conference on Software and Data Technologies</source>
          . pp.
          <volume>5</volume>
          –
          <fpage>12</fpage>
          . Springer-Verlag Berlin Heidelberg, Barcelona (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Graube</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hensel</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Urbas</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>R43ples: Revisions for Triples - An Approach for Version Control in the Semantic Web</article-title>
          . In: Knuth,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Kontokostas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Sack</surname>
          </string-name>
          ,
          <string-name>
            <surname>H.</surname>
          </string-name>
          <source>(eds.) 1st Workshop on Linked Data Quality co-located with 10th International Conference on Semantic Systems. CEUR Workshop Proceedings</source>
          , Leipzig (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Im</surname>
            ,
            <given-names>D.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>S.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>H.J.</given-names>
          </string-name>
          :
          <article-title>A Version Management Framework for RDF Triple Stores</article-title>
          .
          <source>International Journal of Software Engineering and Knowledge Engineering</source>
          <volume>22</volume>
          (
          <issue>01</issue>
          ),
          <volume>85</volume>
          –106 (Feb
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Im</surname>
            ,
            <given-names>D.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zong</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>E.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yun</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>H.G.</given-names>
          </string-name>
          :
          <article-title>A hypergraph-based storage policy for RDF version management system</article-title>
          .
          <source>6th International Conference on Ubiquitous Information Management and Communication - ICUIMC '12</source>
          p.
          <volume>1</volume>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Psaraki</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tzitzikas</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>CPOI: A Compact Method to Archive Versioned RDF Triple-Sets</article-title>
          ,
          <volume>1</volume>
          –
          <fpage>25</fpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Sande</surname>
            ,
            <given-names>M.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Colpaert</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Verborgh</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coppens</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mannens</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Walle</surname>
            ,
            <given-names>R.V.D.</given-names>
          </string-name>
          :
          <article-title>R&amp;Wbase: Git for triples</article-title>
          . In: Bizer,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Heath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Berners-Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Hausenblas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Auer</surname>
          </string-name>
          , S. (eds.)
          <source>Proceedings of the WWW2013 Workshop on Linked Data on the Web</source>
          . pp.
          <volume>1</volume>
          –
          <issue>5</issue>
          . CEUR Workshop Proceedings, Rio de Janeiro (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Stefanidis</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chrysakis</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Flouris</surname>
          </string-name>
          , G.:
          <article-title>On Designing Archiving Policies for Evolving RDF Datasets on the Web</article-title>
          , pp.
          <volume>43</volume>
          –
          <issue>56</issue>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Thompson</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Personick</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cutcher</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>The Bigdata RDF Graph Database</article-title>
          . In: Wagner,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Hose</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Schenkel</surname>
          </string-name>
          ,
          <string-name>
            <surname>R</surname>
          </string-name>
          . (eds.)
          <source>Linked Data Management, chap. 8</source>
          , pp.
          <volume>193</volume>
          –
          <fpage>237</fpage>
          . CRC Press, Boca Raton (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Tzitzikas</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Theoharis</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andreou</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>On Storage Policies for Semantic Web Repositories That Support Versions</article-title>
          . In:
          <string-name>
            <surname>Bechhofer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hauswirth</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoffmann</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koubarakis</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (eds.) 5th
          <source>European Semantic Web Conference</source>
          . pp.
          <fpage>705</fpage>
          –
          <lpage>719</lpage>
          . Springer Berlin Heidelberg, Tenerife (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Völkel</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Groza</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>SemVersion: An RDF-based ontology versioning system</article-title>
          . In:
          <source>Proc. 5th IADIS International Conference on WWW/Internet</source>
          . IADIS Press (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tiropanis</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>H.C.</given-names>
          </string-name>
          :
          <article-title>LHD: Optimising Linked Data Query Processing Using Parallelisation</article-title>
          . In:
          <string-name>
            <surname>Bizer</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heath</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berners-Lee</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hausenblas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Auer</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (eds.)
          <source>Proceedings of the WWW2013 Workshop on Linked Data on the Web</source>
          . vol.
          <volume>996</volume>
          . CEUR Workshop Proceedings, Rio de Janeiro (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larson</surname>
            ,
            <given-names>P.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Elmongui</surname>
            ,
            <given-names>H.G.</given-names>
          </string-name>
          :
          <article-title>Lazy Maintenance of Materialized Views</article-title>
          . In:
          <source>Proceedings of the 33rd International Conference on Very Large Data Bases</source>
          . pp.
          <fpage>231</fpage>
          –
          <lpage>242</lpage>
          . VLDB Endowment, Vienna (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>