<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On the need to include functional testing in RDF stream engine benchmarks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniele Dell'Aglio</string-name>
          <email>daniele.dellaglio@mail.polimi.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Balduini</string-name>
          <email>balduini@elet.polimi.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emanuele Della Valle</string-name>
          <email>emanuele.dellavalle@polimi.it</email>
        </contrib>
        <aff>Dipartimento di Elettronica, Politecnico di Milano, P.za L. Da Vinci, Milano, Italy</aff>
      </contrib-group>
      <abstract>
        <p>We discuss how to improve existing RDF stream engine benchmarks with functional tests that verify result correctness before stressing the compared systems.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Since 2008 different research groups have studied the problem of query answering
over streams of RDF data [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Several RDF stream engines have been designed
and developed, such as C-SPARQL [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], SPARQLstream [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and CQELS [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The
following step, started two years ago, has been the comparison and
evaluation of these systems. To the best of our knowledge, this effort has produced two
benchmarks: SRBench [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and LSBench [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>One common problem of these two benchmarks is that they do not consider
the output of the RDF stream engines. SRBench performs only functional tests to
determine the features supported by the RDF stream engines; LSBench
does not verify the correctness of the answers: it limits the analysis to the number
of outputs, which is not enough to determine whether an answer is correct. In other
words, both benchmarks make two assumptions: 1) the tested systems work
correctly, and 2) the tested systems have the same operational semantics. If
either assumption does not hold, the benchmarks supply misleading
information on the performance of the compared RDF stream engines.</p>
      <p>In this paper, we report our experience with the analysis of three RDF
stream engines: C-SPARQL, CQELS and SPARQLstream (we used the last version of
each available on the public Web sites as of March 14; for more information visit
http://www.streamreasoning.org/bersys2013/). We show that the two
assumptions do not always hold, and consequently the results of the benchmarks
are not enough to compare the systems. We also supply guidelines
for designing an improved benchmark for RDF stream engines: we propose the
addition of a set of functional tests that verify the correctness of the results, in order
to obtain more accurate benchmarks. In particular, we provide the following
contributions:
– considerations on benchmark principles: in order to define a benchmark
for RDF stream engines it is necessary to deeply understand the operational
semantics of these systems (see bold-faced text in Section 3);
– guidelines to define functional tests that verify that the RDF stream
engines behave correctly (see bold-faced text in Section 4);
– a discussion of the results reported by SRBench and LSBench.</p>
      <p>In Section 2, we briefly introduce the RDF stream engines and the existing
benchmarks for evaluating their performance. Then, in Section 3, we focus on the
models at the basis of the RDF stream engines, explaining why their operational
semantics should be well-defined, while in Section 4 we discuss the need to verify that
each system correctly implements the operational semantics defined by its model.
Finally, Section 5 concludes with considerations about the tests and guidelines
for the improvement of the RDF stream benchmarks.</p>
    </sec>
    <sec id="sec-2">
      <title>Related work</title>
      <p>In the following, we provide the background information: we first briefly describe
the behaviour of a generic RDF stream engine, then we provide information
about the two benchmarks for these systems.</p>
      <sec id="sec-2-1">
        <title>RDF stream engines</title>
        <p>In its general definition, an RDF stream engine is a system able to continuously
answer a query over an RDF stream. An RDF stream is an infinite sequence of
timestamped RDF triples. A timestamped RDF triple is an RDF triple
annotated with a time instant (also known as application timestamp); we represent
a timestamped RDF triple as &lt;s p o&gt;:[τ ], where &lt;s p o&gt; is a well-defined RDF
triple and τ is a natural number indicating the application time.</p>
        <p>
          RDF stream engines are a particular case of stream engines; in these systems
the input is a generic stream, composed of a sequence of timestamped tuples [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
To execute a query over infinite sequences of elements, stream engines usually
follow a three-step approach: 1) they transform the stream into a relation through
a stream-to-relation (S2R) operator; 2) they execute a relation-to-relation
transformation through an R2R operator (e.g., a query); and 3) they produce an output
stream through a relation-to-stream (R2S) operator.
        </p>
        <p>One of the most used families of stream-to-relation operators is that of sliding
windows: they extract a contiguous portion of the stream. For the sake of
brevity, we report only the definition of the time-based sliding window. A
time-based sliding window has an associated time range [to, tc) (known as scope) and it
contains all the tuples of the stream whose application timestamp falls in the scope.
The scope is regularly updated to consider different blocks of the stream. The
scope and its update are defined by two parameters, size and slide: the size (ω)
defines the width of the window (ω = tc − to); the slide (β) defines the update
rate and is the time distance between two consecutive blocks of the window. A
particular case of time-based sliding window is the one where the size equals
the slide. This window is called a time-based tumbling window: it partitions
the stream, ensuring that each timestamped tuple of the stream is in one and
only one window block.</p>
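        <p>As an illustration of the definitions above, the blocks of a time-based sliding window can be sketched as follows (our own toy code with invented data, not taken from any of the cited engines):</p>

```python
# Sketch of a time-based sliding window (illustrative, not an engine API).
# The i-th block has scope [t0 + i*beta, t0 + i*beta + omega) and contains
# the timestamped tuples whose application time tau falls in that scope.

def window_blocks(stream, omega, beta, t0, t_end):
    """stream: list of (item, tau) pairs; returns a list of (scope, content)."""
    blocks = []
    for to in range(t0, t_end, beta):
        tc = to + omega
        content = [x for (x, tau) in stream if tau in range(to, tc)]
        blocks.append(((to, tc), content))
    return blocks

stream = [("s1", 0), ("s2", 4), ("s3", 11)]
# A tumbling window (omega equal to beta) partitions the stream:
tumbling = window_blocks(stream, omega=10, beta=10, t0=0, t_end=20)
# block [0, 10) holds s1 and s2; block [10, 20) holds s3
```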
        <p>As relation-to-relation operators, those systems usually use well-known
transformations, such as algebras: RDF stream engines usually consider SPARQL and
its algebra. It is worth noting that the relations are time-dependent:
they change over time (when the scope is updated).</p>
        <p>
          Finally, relation-to-stream operators are used when the output of the query
engine should be a new stream. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] defines three operators: Rstream (which streams
the whole output relation), Istream (which streams only the new tuples added to the
relation) and Dstream (which streams only the tuples deleted from the relation).
        </p>
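        <p>The three operators can be sketched as simple set operations over two consecutive states of the time-varying relation (our own illustration of these definitions; the relation contents below are invented):</p>

```python
# R2S operators as diffs between the previous and the current state of the
# time-varying relation (modelled here as sets of tuples).

def rstream(curr, prev):
    return set(curr)      # streams the whole current relation

def istream(curr, prev):
    return curr - prev    # streams only the tuples just added

def dstream(curr, prev):
    return prev - curr    # streams only the tuples just deleted

prev = {("m1", "r1")}
curr = {("m1", "r2")}
# istream emits the new detection, dstream emits the expired one
```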
        <p>All three systems we consider in this work implement time-based sliding
windows and they all support (at least partially) SPARQL 1.1. As
relation-to-stream operators, C-SPARQL implements Rstream, CQELS implements Istream,
while SPARQLstream implements all three operators.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Benchmarks for RDF stream engines</title>
        <p>
          SRBench [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] proposes a suite of test queries and defines metrics to evaluate the
performance of the systems. This benchmark contains 17 queries to gather the
properties of the RDF stream engines. The queries vary to ensure that several
features of the target system are tested: queries involving single or multiple input
streams, queries over stream-only data sources or over mixed stream and static
data sources, etc. In [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] the authors applied the benchmark to the existing RDF
stream engines, and explained the differences in terms of supported
functionalities. Time and memory performance tests, and scalability tests, are not targeted
in the current version of SRBench.
        </p>
        <p>
          LSBench [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] proposes three tests to evaluate the RDF stream engines. The
first one is a functional test to verify the operators and the functionalities
supported by the engines: it is similar to the test proposed by SRBench. The
second test is a correctness test: its goal is to verify whether the tested RDF stream
engine produces the correct output. In practice, it analyses only the number of
produced answers, assuming that the contents of the output are correct. Finally,
the third test is a maximum input throughput test: its goal is to evaluate the
maximum throughput of the RDF stream engines. This test is done by increasing
the rate of data in the stream and verifying the number of answers. For each
test a set of 12 queries is provided; similarly to SRBench, the queries vary to
take into account different features of the engines (single and multiple streams,
presence of static data, etc.).
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Comparing RDF stream processors knowing their operational semantics</title>
      <p>The first point we want to focus on is the task of determining the
correct answer of an RDF stream engine. Given an input data stream, an input
query and the operational semantics of the engine, it should be possible to
determine the expected answer of the RDF stream engine. The specifications
of the RDF stream engines are usually available in the scientific articles and in
the documentation available on their Web sites.</p>
      <p>Let’s consider for example SPARQL engines: for SPARQL query answering
engines, a test suite defined by the W3C (cf. http://www.w3.org/2009/sparql/docs/tests/)
is available to verify the correctness of an
implementation through a set of SPARQL queries. Each query is associated with
its expected result. In that case, the SPARQL algebra is enough to explain how
to determine which should be the output of a SPARQL engine implementation
given an input and a query.</p>
      <p>In the case of RDF stream engines, this process is more complex: the inputs
(the stream and the query) and the SPARQL algebra are not enough to
determine one correct answer. We set up the following experiment: we considered an
input stream S1 and an input query Q1 defining a tumbling window W with size
and slide of ten seconds. We registered the query at different (consecutive) time
instants, obtaining different outputs (details of the experiment can be found in
Appendix A.1). It is worth noting that all the different results are correct.</p>
      <p>
        This behaviour can be explained through an extended model of stream
engine, SECRET [
        <xref ref-type="bibr" rid="ref8">8</xref>
          ]. SECRET is a framework to support the analysis of the execution
semantics of stream processing systems. The authors define a model to explain the different behaviours
of the S2R operators of the stream engines. In particular, the authors propose
four parameters to define the behaviour of the window operator: the time range
of the active window (the scope, as defined above); the subset of the stream
included in the active window (the content); the conditions under which the input
can be added to the active window (the tick); and finally the conditions under
which the window content can be processed by the query engine (the report).
One important concept defined by SECRET is t0: it represents the application
time instant at which the first window starts.
      </p>
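      <p>The role of t0 can be illustrated with a small sketch (our own toy data, in the spirit of the experiment in Appendix A.1): the same stream and the same tumbling window yield different, equally correct partitions depending on when the first window starts.</p>

```python
# Two registrations of the same tumbling-window query, differing only in t0.

def blocks(stream, omega, t0, t_end):
    out = []
    for to in range(t0, t_end, omega):
        out.append([x for (x, tau) in stream if tau in range(to, to + omega)])
    return out

stream = [("s1", 2), ("s2", 3), ("s3", 14), ("s4", 15)]
early = blocks(stream, omega=10, t0=0, t_end=20)
late = blocks(stream, omega=10, t0=3, t_end=23)
# early groups s1 with s2; late misses s1, whose timestamp precedes t0
```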
      <p>It becomes possible to determine which should be the correct answer given
the inputs by extending the operational semantics of a stream engine with the
additional window parameters defined by SECRET. RDF stream benchmark
designers should understand the differences in operational semantics
of the systems they want to target. In particular, different behaviours
of the S2R operators affect the outputs and the performance of those
systems, and explaining the benchmark results without
considering this fact can be misleading.</p>
    </sec>
    <sec id="sec-4">
      <title>Adherence of RDF stream engines to their operational semantics</title>
      <p>In the previous sections, we explained that in order to benchmark RDF stream
engines, their operational semantics should be complete and comparable. The
last point we consider is the adherence of the RDF stream engines to their
operational semantics: even if the specification is fine, if the implementation is
wrong, the results of the benchmarking would be misleading. To verify this, we
check the correctness of the output. It may appear obvious, but let us restate
that, given an RDF stream engine, an input query and a dataset, the result is
correct if it conforms to the one defined by the system specification (better if in
the form of a formal semantics).</p>
      <p>The specifications of the RDF stream engines describe not only the
operational semantics of the systems, but also the requirements at the basis of their
design, the features they should implement and the use case scenarios with suites
of functional and non-functional tests. It can happen that the test suite is badly
defined – too strictly related to the use case to elicit all the requirements –
and thus the implemented RDF stream engine does not work correctly in general
cases.</p>
      <p>When designing tests to check that an RDF stream engine correctly
implements its operational semantics, it is important to take into account both the
whole query answering process and the single sub-processes that
compose it. In fact, the query answering process works correctly only if
the three transformations that compose it are correct.</p>
      <p>Before discussing the tests on the S2R operator (the main focus of this
work), we explain how tests for the R2R and the R2S operators could be built.
Regarding the R2R operator, initial work has been done by the two existing
benchmarks: in their analyses, these works check the limits of these operators,
verifying which constructs are supported. The next step should be the
verification of the correctness of the results of the R2R operators; this goal can be
achieved by exploiting the existing work on SPARQL query answering engines: the
input data should be placed in one window (so the size has to be properly
defined) and the output must be the same as the one defined by the
SPARQL tests.</p>
      <sec id="sec-4-1">
        <title>Tests for the R2S operator</title>
        <p>The correct implementation of the R2S operator can be verified with the
following test: let’s consider as input a generic stream S. The input query
Q uses as S2R operator a time-based sliding window W with size ω &gt; 1 and
slide β = 1; Q uses as R2R operator the identity transformation. It is easy
to verify that Rstream works correctly if the system outputs the current content
of W; Istream works correctly if the system outputs the new triples that enter
W; finally, Dstream works correctly if the system outputs the triples that exit
W.</p>
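        <p>Under our own toy model of the window (a sketch under stated assumptions, not any engine's API), the expected outputs of this test follow directly from two consecutive window states:</p>

```python
# Expected outputs of the R2S adherence test: window with omega = 2, beta = 1
# and the identity R2R. window(tc) is the content of W reported at time tc.

def window(stream, omega, tc):
    return {x for (x, tau) in stream if tau in range(tc - omega, tc)}

stream = [("a", 0), ("b", 1), ("c", 2)]
prev = window(stream, omega=2, tc=2)   # the block before the slide
curr = window(stream, omega=2, tc=3)   # the block after the slide

expected_rstream = curr                # the current content of W
expected_istream = curr - prev         # the triples that enter W
expected_dstream = prev - curr         # the triples that exit W
# An engine's output at tc = 3 is then compared against these sets.
```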
      </sec>
      <sec id="sec-4-2">
        <title>Tests for the S2R operator</title>
        <p>The tests on S2R operators aim to verify that the windows work
properly: they should check that the window contains the correct
elements and that its update is performed in the right way. To check
the content of the window, if the tested RDF stream engine supports the Rstream
R2S operator, it is easy to verify the behaviour of the window: it is enough to use
the identity transformation as R2R operator (i.e., to copy the content of the input
relation into the output relation), so the output is the content of the window. If
the RDF stream engine does not support Rstream, the test described above
is not enough. If the system has the Istream operator and the test query uses
the identity transformation, the output allows checking only whether the tuples
are correctly inserted in the window, but not whether they are correctly removed (a
similar consideration holds for systems supporting the Dstream operator only).
This problem can be fixed by preparing the test with a different SPARQL query,
exploiting the aggregate functions or the join operator. In Appendix A.2 we
present an example of a test of this kind.</p>
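        <p>As an example of such a test (our own sketch with invented timestamps), a per-block COUNT distinguishes a correct window from a window that never removes its triples, even when only insertions are observable through Istream:</p>

```python
# Per-block counts for a correct window versus a window that never evicts
# its triples (the latter is a simulated faulty implementation).

def counts_correct(taus, omega, beta, t_end):
    return [sum(1 for t in taus if t in range(to, to + omega))
            for to in range(0, t_end, beta)]

def counts_no_eviction(taus, omega, beta, t_end):
    return [sum(1 for t in taus if t in range(0, to + omega))
            for to in range(0, t_end, beta)]

taus = [0, 5, 10, 15]                        # one triple every 5 seconds
correct = counts_correct(taus, 3, 3, 18)     # one triple at most per block
buggy = counts_no_eviction(taus, 3, 3, 18)   # counts keep growing
# The insertions are identical; the aggregate exposes the missing removals.
```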
        <p>The results of these tests on the systems we considered allowed us to verify
that C-SPARQL and SPARQLstream correctly implement the window, while
CQELS does not remove the elements from the window. In other words, CQELS
ignores the window size and processes the whole data contained in S. It is not
in the scope of this paper to report on the potential improvement in terms of
input throughput of such a CQELS shortcoming; we intend to further inspect
this behaviour by checking memory allocation in the long run. A performing RDF
stream engine should not only maximize input throughput, but also minimize
and keep as stable as possible its memory allocation.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusion and future directions</title>
      <p>In this work, we analyzed the existing RDF stream engines and showed that
the absence of exhaustive functional testing hides anomalous behaviours of those
systems. It is worth noting that neither SRBench nor LSBench detects S2R-related
problems, because they assume that the answer returned by the system
is correct. Additionally, we illustrated how the operational semantics of current
RDF stream engines are not able to define a unique correct answer given an
input stream and a query.</p>
      <p>
        Both benchmarks for RDF stream engines take into account different
dimensions in the definition of the experiments, such as: single and multiple input
streams; single and multiple windows over the input stream; optional presence of
static knowledge. However, they should also take into account the variety of possible
system outputs, in order to relate the performance to the correctness of the
results, as happens in stream engine benchmarks such as the Linear Road
Benchmark [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. It is important to verify the correctness of the system in all
those cases, in order to create an exhaustive set of functional tests.
      </p>
      <p>In our future work, we intend to extend one of the RDF stream benchmarks
with correctness tests. This would allow the community to improve the existing
systems and, potentially, to test new systems against an exhaustive set of tests.</p>
    </sec>
    <sec id="sec-6">
      <title>Experiments</title>
      <sec id="sec-6-1">
        <title>Experiment 1 - Multiple valid results</title>
        <p>
          As scenario we use a simplified version of the scenario considered by CQELS
authors in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]: there are two connected rooms, r1 and r2; each room has a sensor
able to detect the individuals (m1 and m2) inside it. The stream contains triples in
the form &lt;http://ex.org/mi&gt; &lt;http://ex.org/detectedAt&gt; &lt;http://ex.org/rj&gt;:[τ],
indicating that the individual mi has been detected in the room rj at time τ.
        </p>
        <p>Let’s set up the stream S1, depicted in Figure 1. S1 contains four triples
describing the detections of the two individuals m1 and m2: first in r1 (triples
S1 and S2, respectively) and then in r2 (triples S3 and S4, respectively).</p>
        <p>We want to know when the two individuals m1 and m2 are in the same room.
As window, we consider a time-based tumbling window of 10 seconds; the query
returns the room where both m1 and m2 are detected, or the empty set otherwise.
The query, expressed in the C-SPARQL syntax, is available in Listing 1.1.</p>
        <p>Let’s now try to determine the correct answer the system should return.
Looking at the stream S1, the expected answer is http://ex.org/r1
first, and http://ex.org/r2 then. But it is worth noting that this is not the
only correct answer: all the results supplied in Table 1 are correct. The different
results are a consequence of the window operator: if it starts at t = 0 or t = 1
(windows W0 and W1 in Figure 1), the answer is the one presented above. If the
window starts at t &gt; 1 the output produced is different: if t = 2 (window
W2), the first answer of the window is empty, because its content at 12 (the scope
of W2 at 12 is [2,12)) is S2 only; the second returned answer is http://ex.org/r2,
because at 22 W2 contains both S3 and S4.
The results are reported in Table 1. The outputs of the SPARQLstream and
C-SPARQL engines are similar: the only difference is the second output of W6.
It happens because C-SPARQL answers only when the content of the window is not
empty; looking at Figure 1 it is easy to observe that the second block of W6 is
empty.</p>
        <p>A first observation on the results of CQELS is that when the system produces
a result, it is always at τ = 3 or τ = 15. This happens because CQELS does not
wait for the window to close (i.e., for the size of the active window to equal ω).
It is a possible behaviour of the S2R operator and we think it is correct. The
second consideration is related to the fact that CQELS always outputs ex:r2 in
each experiment. We are not able to explain whether this behaviour is correct,
because we are not able to control the t0 of the window as in the other systems
(in C-SPARQL and SPARQLstream, the window W is created when the first query
with W in its FROM STREAM clause is submitted to the system, i.e., t0 equals the
query registration time; we are not able to determine whether this is true also for
CQELS). We tried to probe it, but on the one hand its source code is not available
on the Web site, and on the other hand some bugs of CQELS (see Appendix A.2)
did not allow us to investigate through additional experiments.</p>
        <p>Finally, it is worth noting that in this experiment we considered a tumbling
window, but it is easy to observe that there are multiple correct results every
time the query defines a time-based sliding window with slide β &gt; 1. In general,
none of the operational semantics of the RDF stream engines we considered
can determine a unique correct answer given a stream and a query.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Experiment 2 - Adherence tests for window</title>
        <p>As scenario we use the same one as in the first experiment: there are two connected
rooms, r1 and r2; each room has a sensor able to detect the individuals
(m1, m2, m3 and m4) when they are inside. The stream contains triples in the
form &lt;http://ex.org/mi&gt; &lt;http://ex.org/detectedAt&gt; &lt;http://ex.org/rj&gt;:[τ],
indicating that the individual mi has been detected in the room rj at time τ.</p>
        <p>
          We set up the stream S2 depicted in Figure 2: it contains four triples Sk
(k ∈ [1, 4]). Every 5 seconds a triple is sent (so, S1 has timestamp 0, S2 has
timestamp 5 and so on). We want to know in which room there are two
(different) individuals that enter within 3 seconds of each other. To do so, we define
a time-based tumbling window W with size and slide ω = β = 3 seconds. For the sake of
brevity, we report here only the query in CQELS syntax (Listing 1.2); queries
for the other two systems and the code to repeat the experiment are available
at: http://streamreasoning.org/benchmarks/bersys2013.
        </p>
        <p>Looking at the picture it is easy to observe that the query should return
only empty answers, or no answers if the engine only reports on content change:
timestamped triples in S2 with the same room have a time difference of 10
seconds (S1 and S3 for r1, S2 and S4 for r2, respectively). Additionally, the
minimum time distance between two triples is 5 seconds, which is greater
than the window size ω: the window never contains two triples, so the WHERE
clause of the query is never satisfied.</p>
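        <p>The reasoning above can be checked mechanically with a short sketch of ours over the timestamps of S2:</p>

```python
# With omega = beta = 3 and one triple every 5 seconds, no block of the
# tumbling window ever contains two triples.

taus = [0, 5, 10, 15]      # application timestamps of S1..S4
omega = beta = 3

block_sizes = [sum(1 for t in taus if t in range(to, to + omega))
               for to in range(0, 18, beta)]
# every block holds at most one triple, so the join can never match two
# different individuals in the same window
```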
        <p>Listing 1.2. CQELS query used in Experiment 2</p>
        <p>We performed this test on the three RDF stream engines we are considering,
and while C-SPARQL and SPARQLstream behaved in the correct way, CQELS
returned wrong answers in W4 and W5, as shown in Table 2.</p>
        <p>SELECT ?p1 ?p2 ?room
WHERE {
  STREAM &lt;http://ex.org/streams/test&gt; [RANGE 3s SLIDE 3s] {
    ?p1 &lt;http://ex.org/detectedAt&gt; ?room .
    ?p2 &lt;http://ex.org/detectedAt&gt; ?room
  }
}</p>
        <p>Listing 1.3. CQELS query used in Experiment 2 – without the FILTER clause</p>
        <p>The new result is reported in Table 3. C-SPARQL and SPARQLstream return
either the empty result or mappings where ?p1 and ?p2 bind the same individual
(in other words, the join is performed on the same RDF triple). CQELS
also performs the join over different RDF triples. We traced this
CQELS behaviour to a possible incorrect removal of the triples from the window.</p>
        <table-wrap id="tbl3">
          <label>Table 3.</label>
          <caption>
            <p>Results of Experiment 2 with the query without the FILTER clause</p>
          </caption>
          <table>
            <thead>
              <tr><th colspan="4">C-SPARQL</th><th colspan="4">CQELS</th><th colspan="4">SPARQLstream</th></tr>
              <tr><th>ts</th><th>?p1</th><th>?p2</th><th>?room</th><th>ts</th><th>?p1</th><th>?p2</th><th>?room</th><th>ts</th><th>?p1</th><th>?p2</th><th>?room</th></tr>
            </thead>
            <tbody>
              <tr><td>3</td><td>ex:m1</td><td>ex:m1</td><td>ex:r1</td><td>0</td><td>ex:m1</td><td>ex:m1</td><td>ex:r1</td><td>3</td><td>ex:m1</td><td>ex:m1</td><td>ex:r1</td></tr>
              <tr><td>6</td><td>ex:m2</td><td>ex:m2</td><td>ex:r2</td><td>5</td><td>ex:m2</td><td>ex:m2</td><td>ex:r2</td><td>6</td><td>ex:m2</td><td>ex:m2</td><td>ex:r2</td></tr>
              <tr><td colspan="4">no result</td><td colspan="4">no result</td><td>9</td><td>–</td><td>–</td><td>–</td></tr>
              <tr><td>12</td><td>ex:m3</td><td>ex:m3</td><td>ex:r1</td><td>10</td><td>ex:m1</td><td>ex:m3</td><td>ex:r1</td><td>12</td><td>ex:m3</td><td>ex:m3</td><td>ex:r1</td></tr>
              <tr><td colspan="4" /><td colspan="1" /><td>ex:m3</td><td>ex:m3</td><td>ex:r1</td><td colspan="4" /></tr>
              <tr><td colspan="4">no result</td><td colspan="4">no result</td><td>15</td><td>–</td><td>–</td><td>–</td></tr>
              <tr><td>18</td><td>ex:m4</td><td>ex:m4</td><td>ex:r2</td><td>15</td><td>ex:m2</td><td>ex:m4</td><td>ex:r2</td><td>18</td><td>ex:m4</td><td>ex:m4</td><td>ex:r2</td></tr>
              <tr><td colspan="4" /><td colspan="1" /><td>ex:m4</td><td>ex:m4</td><td>ex:r2</td><td colspan="4" /></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name><surname>Della Valle</surname>, <given-names>E.</given-names></string-name>,
          <string-name><surname>Ceri</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Van Harmelen</surname>, <given-names>F.</given-names></string-name>,
          <string-name><surname>Fensel</surname>, <given-names>D.</given-names></string-name>:
          <article-title>It's a Streaming World! Reasoning upon Rapidly Changing Information</article-title>
          (<year>2009</year>)
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Barbieri</surname>
            ,
            <given-names>D.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Braga</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ceri</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Valle</surname>
            ,
            <given-names>E.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grossniklaus</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>C-sparql: a continuous query language for rdf data streams</article-title>
          .
          <source>Int. J. Semantic Computing</source>
          <volume>4</volume>
          (
          <issue>1</issue>
          ) (
          <year>2010</year>
          )
          <fpage>3</fpage>
          -
          <lpage>25</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Calbimonte</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jeung</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corcho</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aberer</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Enabling Query Technologies for the Semantic Sensor Web</article-title>
          .
          <source>International Journal on Semantic Web and Information Systems</source>
          <volume>8</volume>
          (
          <issue>1</issue>
          ) (
          <year>2012</year>
          )
          <fpage>43</fpage>
          -
          <lpage>63</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Le-Phuoc</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dao-Tran</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>A native and adaptive approach for unified processing of linked streams and linked data</article-title>
          .
          <source>In: International Semantic Web Conference (ISWC</source>
          <year>2011</year>
          ). Volume
          <volume>1380</volume>
          ., Bonn, Germany, Springer (
          <year>2011</year>
          )
          <fpage>370</fpage>
          -
          <lpage>388</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name><surname>Zhang</surname>, <given-names>Y.</given-names></string-name>,
          <string-name><surname>Pham</surname>, <given-names>M.D.</given-names></string-name>,
          <string-name><surname>Corcho</surname>, <given-names>O.</given-names></string-name>,
          <string-name><surname>Calbimonte</surname>, <given-names>J.P.</given-names></string-name>:
          <article-title>SRBench: A Streaming RDF/SPARQL Benchmark</article-title>
          . In: International Semantic Web Conference (ISWC
          <year>2012</year>
          ), Boston, USA (
          <year>2012</year>
          )
          <fpage>641</fpage>
          -
          <lpage>657</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Le-Phuoc</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dao-Tran</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pham</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Linked Stream Data Processing Engines: Facts and Figures</article-title>
          . In: International Semantic Web Conference (ISWC
          <year>2012</year>
          ). Volume
          <volume>1380</volume>
          ., Boston, USA, Springer (
          <year>2012</year>
          )
          <fpage>300</fpage>
          -
          <lpage>312</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Arasu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Babu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Widom</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>The CQL continuous query language : semantic foundations</article-title>
          .
          <source>The VLDB Journal</source>
          <volume>15</volume>
          (
          <issue>2</issue>
          ) (
          <year>2006</year>
          )
          <fpage>121</fpage>
          -
          <lpage>142</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Botan</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Derakhshan</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dindar</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haas</surname>
            ,
            <given-names>L.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tatbul</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Secret: A model for analysis of the execution semantics of stream processing systems</article-title>
          .
          <source>PVLDB</source>
          <volume>3</volume>
          (
          <issue>1</issue>
          ) (
          <year>2010</year>
          )
          <fpage>232</fpage>
          -
          <lpage>243</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Arasu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cherniack</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Galvez</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Linear road: A stream data management benchmark</article-title>
          .
          <source>In: International Conference on Very Large Data Bases (VLDB</source>
          <year>2004</year>
          ), Toronto, Canada, Morgan Kaufmann Publishers Inc. (
          <year>2004</year>
          )
          <fpage>480</fpage>
          -
          <lpage>491</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>