<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Stream-temporal Querying with Ontologies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ralf Möller</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christian Neuenstadt</string-name>
          <email>neuenstadt@ifis.uni-luebeck.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Özgür L. Özçep</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Information Systems (Ifis) University of Lübeck Lübeck</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Recent years have seen theoretical and practical efforts on temporalizing and streamifying ontology-based data access (OBDA). This paper contributes to the practical efforts with a description and evaluation of a prototype implementation for the stream-temporal query language framework STARQL. STARQL serves the needs of industrially motivated scenarios, providing the same interface for querying historical data (reactive diagnostics) and for querying streamed data (continuous monitoring, predictive analytics). We show how to transform STARQL queries w.r.t. mappings into standard SQL queries, with the difference between historical and continuous querying lying only in the use of a static vs. an incrementally updated window table. Experiments with a STARQL prototype engine using the PostgreSQL DBMS show the implementability and feasibility of our approach.</p>
      </abstract>
      <kwd-group>
        <kwd>stream reasoning</kwd>
        <kwd>OBDA</kwd>
        <kwd>monitoring</kwd>
        <kwd>temporal reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        This paper contributes results to recent efforts for adapting the paradigm of
ontology-based data access (OBDA) to scenarios with streaming data [
        <xref ref-type="bibr" rid="ref12 ref22 ref6">12,6,22</xref>
        ]
as well as temporal data [
        <xref ref-type="bibr" rid="ref4 ref5">5,4</xref>
        ]. It extends our previous work [
        <xref ref-type="bibr" rid="ref19 ref20 ref21">20,21,19</xref>
        ]—started
in the context of the FP7 EU project Optique and resulting in the query
language framework STARQL (Streaming and Temporal ontology Access with a
Reasoning-based Query Language)—with a proof-of-concept implementation that
is based on PostgreSQL. STARQL serves the needs of industrially motivated
scenarios, providing the same interface for querying historical data—as needed
for reactive diagnostics—and for querying streamed data—as needed for
continuous monitoring and predictive analytics in real-time scenarios.
      </p>
      <p>
        Considering streams, the main challenge is that data cannot be processed as
a whole. The simple but fundamental idea is to apply a (small) sliding window
which is updated as new elements from the stream arrive at the query-answering
system (see, e.g., [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]). The idea of previous approaches adapting OBDA to
streams [
        <xref ref-type="bibr" rid="ref12 ref22 ref6">12,6,22</xref>
        ] is that answering continuous ontology-level queries on streams
can be reduced to answering ontology-level queries on dynamically updated
finite window contents, which are treated as single ABoxes. This approach can
lead to unintended inconsistencies, as exemplified by industrial use cases such as
that of Siemens, one of the industrial stakeholders in the Optique project. For
example, multiple measurement values for a sensor collected at different time
points lead to inconsistencies since the value association should be functional.
In STARQL, the idea of processing streams with a window operator is pushed
further by defining a finite sequence of consistent ABoxes for each window.
      </p>
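<p>The sliding-window idea can be sketched in a few lines of Python (an illustrative toy model with integer timestamps, not part of the STARQL engine): at each pulse, the window collects the timestamped assertions of the last two seconds.</p>
      <p>
```python
# Toy model of a sliding window over a stream of timestamped assertions.
# Illustrative sketch only; names and data are made up for this example.

def window_contents(stream, now, width):
    """Assertions whose timestamp falls into [now - width, now]."""
    return [(a, t) for (a, t) in stream if now - width <= t <= now]

# Stream of (assertion, timestamp) pairs: val(s0, v) at seconds 0..5.
s_msmt = [(("s0", 90), 0), (("s0", 93), 1), (("s0", 94), 2),
          (("s0", 92), 3), (("s0", 93), 4), (("s0", 95), 5)]

# A [NOW-2s, NOW] window pulsed every second.
snapshots = [window_contents(s_msmt, now, 2) for now in range(6)]
```
      </p>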
      <p>
        Considering temporal reasoning for reactive diagnostics as needed for the
Optique use case provided by Siemens [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], we found that a window based
approach leads to elegant solutions as well. A window is used to focus on some
subset of temporal data in a temporal query. Now, over time, different foci are
relevant for reactive diagnosis. Thus, foci changes can be modelled by a window
virtually “sliding” over temporal data, albeit not in real time. Thus, STARQL
is defined such that it can be used equally for temporal
and stream reasoning. The semantics of STARQL does not distinguish between
the real-time and the historical querying scenario.
      </p>
      <p>The ABox sequencing strategy for windows required to avoid inconsistencies,
as argued above, makes rewriting and, more importantly, unfolding of STARQL
queries a challenging task. In particular, one may ask whether it is possible to
rewrite and unfold one STARQL query into a single backend query formulated
in the query language provided by the backend systems. We show that
(Postgre)SQL transformations are indeed possible and describe them in the paper for
the special case of one stream with slide parameter identical to pulse parameter
and a specific sequencing strategy (for the details see the following section).
</p>
    </sec>
    <sec id="sec-2">
      <title>The STARQL Framework</title>
      <p>
        We recap the syntax and the semantics of STARQL with a simple example. For a
complete formal treatment we refer the reader to [
        <xref ref-type="bibr" rid="ref20 ref21">20,21</xref>
        ]. We assume familiarity
with the description logic DL-Lite [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>For illustration purposes, our running example is a measurement scenario in
which there is a (possibly virtual) stream SMsmt of ABox assertions. A stream
of ABox assertions is an infinite set of timestamped ABox assertions of the form
ax⟨t⟩. The timestamps stem from a flow of time (T, ≤) where T may even be a
dense set and where ≤ is a linear order. The initial part of SMsmt, called SMsmt≤5s
here, contains timestamped ABox assertions giving the value of a temperature
sensor s0 at 6 time points starting with 0s.</p>
<p>SMsmt≤5s = { val(s0, 90)⟨0s⟩, val(s0, 93)⟨1s⟩, val(s0, 94)⟨2s⟩,
val(s0, 92)⟨3s⟩, val(s0, 93)⟨4s⟩, val(s0, 95)⟨5s⟩ }</p>
<p>Assume further that a static ABox contains knowledge about sensors telling,
e.g., which sensor is of which type. In particular, let BurnerTipTempSens(s0) be
in the static ABox. Moreover, let there be a pure DL-Lite TBox with additional
information such as BurnerTipTempSens ⊑ TempSens saying that all burner
tip temperature sensors are temperature sensors.</p>
      <p>The Siemens engineer has the following information need: Starting with time
point 00:00 on 1.1.2005, tell me every second those temperature sensors whose
value grew monotonically in the last 2 seconds. A possible STARQL
representation of this information need is illustrated in Listing 1.
PREFIX : &lt;http://www.optique-project.eu/siemens&gt;
CREATE STREAM S_out AS
CONSTRUCT GRAPH NOW { ?s rdf:type MonInc }
FROM STREAM S_Msmt [NOW - 2s, NOW] -&gt; "1S"^^xsd:duration
WITH START = "2005-01-01T01:00:00CET"^^xsd:dateTime,
     END = "2005-01-01T02:00:00CET"^^xsd:dateTime
STATIC ABOX &lt;http://www.optique-project.eu/siemens/ABoxstatic&gt;,
TBOX &lt;http://www.optique-project.eu/siemens/TBox&gt;
USING PULSE WITH
     START = "2005-01-01T00:00:00CET"^^xsd:dateTime,
     FREQUENCY = "1S"^^xsd:duration
WHERE { ?s rdf:type :TempSens }
SEQUENCE BY StdSeq AS seq
HAVING FORALL ?i &lt; ?j IN seq, ?x, ?y:
   IF (GRAPH ?i { ?s :val ?x } AND GRAPH ?j { ?s :val ?y }) THEN ?x &lt;= ?y</p>
      <p>Listing 1: Example query in STARQL</p>
      <p>After the create expressions for the stream and the output frequency the
query’s main content is captured by the CONSTRUCT expression. The construct
expression describes the output format of the stream, using the named-graph
notation of SPARQL for fixing a basic graph pattern (BGP) and attaching a
time expression, here NOW, for the evolving time. The actual result in the example
(in DL notation) is a stream of ABox assertions of the form MonInc(s0)⟨t⟩:
Sout≤5s = { MonInc(s0)⟨0s⟩, MonInc(s0)⟨1s⟩, MonInc(s0)⟨2s⟩, MonInc(s0)⟨5s⟩ }</p>
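<p>The intended semantics of Listing 1 on the example stream can be replayed in Python (an illustrative sketch under simplifying assumptions: a single sensor, integer timestamps, and the monotonicity check inlined):</p>
      <p>
```python
# Replay of the query semantics on the running example (not the STARQL engine).

measurements = {0: 90, 1: 93, 2: 94, 3: 92, 4: 93, 5: 95}  # s0's values

def mon_inc(now, width=2):
    # values in the window [now - width, now], ordered by timestamp
    vals = [v for t, v in sorted(measurements.items()) if now - width <= t <= now]
    # HAVING FORALL ?i < ?j: ?x <= ?y
    return all(vals[i] <= vals[j]
               for i in range(len(vals)) for j in range(i + 1, len(vals)))

s_out = [now for now in range(6) if mon_inc(now)]  # time points with MonInc(s0)
```
      </p>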
      <p>
        The WHERE clause binds variables w.r.t. the non-streaming sources (ABox,
TBox) mentioned in the FROM clause by using (unions) of BGPs. We assume an
underlying DL-Lite logic for the static ABox, the TBox and the BGP (considered
as unions of conjunctive queries UCQs) which allows for concrete domain values,
e.g., DL-LiteA [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In this example, instantiations of the sensors ?s are fixed
w.r.t. a static ABox and a TBox. The semantics of the UCQs embedded into the
WHERE and the HAVING clause is the certain answer semantics [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>The heart of the STARQL queries is the window operator in combination
with sequencing. The operator [NOW-2s, NOW]-&gt;"1S"^^xsd:duration used in
Listing 1 describes a sliding window, which collects the timestamped ABox
assertions of the last two seconds and slides 1s forward in time. Note the
START and END specifications over the stream: these make sense only for
temporal queries over streamed historical data (see Sect. 3.1).</p>
      <p>Every temporal ABox produced by the window operator is converted to a
sequence of (pure) ABoxes. The sequence strategy determines how the
timestamped assertions are sequenced into ABoxes. The sequencing method used in
the example is standard sequencing (StdSeq): assertions with the same
timestamp come into the same ABox. So, in the example, the resulting sequence of
ABoxes at time point 5s is trivial, as no two ABox assertions share the same
timestamp: {val(s0, 92)}⟨0⟩, {val(s0, 93)}⟨1⟩, {val(s0, 95)}⟨2⟩.</p>
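<p>Standard sequencing can be pictured as a grouping step (a Python sketch with integer timestamps; illustrative only):</p>
      <p>
```python
from itertools import groupby

def std_seq(window):
    """Group timestamped assertions by timestamp: assertions sharing a
    timestamp land in the same ABox, indexed in temporal order."""
    ordered = sorted(window, key=lambda at: at[1])   # sort by timestamp
    return [(i, {a for a, _t in grp})                # (state index, ABox)
            for i, (_t, grp) in enumerate(groupby(ordered, key=lambda at: at[1]))]

# Window content at time point 5s in the running example:
window_5s = [(("val", "s0", 92), 3), (("val", "s0", 93), 4), (("val", "s0", 95), 5)]
seq = std_seq(window_5s)
```
      </p>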
      <p>Now, at every time point, one has a sequence of ABoxes on which temporal
(state-based) reasoning can be applied. This is realized in STARQL by a sorted
first-order logic template in which state stamped UCQs conditions are
embedded. We use here again the GRAPH notation from SPARQL. In our example the
HAVING clause expresses a monotonicity condition stating that for all values
?x that are values of sensor ?s w.r.t. the ith ABox (subgraph) and for all values
?y that are values of the same sensor ?s w.r.t. the jth ABox (subgraph), it must
be the case that ?x is less than or equal to ?y.</p>
      <p>
        The grammar for the HAVING clause (not shown here) exploits a safety
mechanism. Without it a HAVING clause such as ?y &gt; 3, with free concrete domain
variable ?y over the reals, would be allowed: the set of bindings for ?y would
be infinite. Moreover, the safety condition guarantees that the evaluation of
the HAVING clause on the ABox sequence depends only on the active domain
not the whole domain, i.e., HAVING clauses are domain independent (d.i.) (see
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for a definition of domain independence). This in turn guarantees that the
HAVING clause can be smoothly transformed into queries of d.i. languages such
as SQL or CQL [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. For the details we refer again to [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>OBDA Transformation of STARQL</title>
      <p>
        As the HAVING clause language is d.i. (see [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]), STARQL can be used as
ontology query language in the OBDA paradigm: STARQL queries can be transformed
into queries over data sources that are equipped with d.i. query languages.
      </p>
      <p>
        As (backend) data source candidates we consider any DBMS providing a
declarative language such as SQL. This is not a limitation in comparison with
those approaches (in particular our own [
        <xref ref-type="bibr" rid="ref19 ref21">21,19</xref>
        ]) that allow relational data stream
management systems (DSMS) as data sources. In fact, the STARQL prototype
in the Optique platform uses a stream-extension of ADP [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] which provides a
CQL-like [
        <xref ref-type="bibr" rid="ref3">3</xref>
] query language. For the transformation of this paper we bypass
the additional abstraction of DSMS by reconstructing the implementation of the
relational window operators on top of incrementally updated window tables: The
operators are the same as for ordinary RDBMS, but the tables are dynamic.
      </p>
      <p>
        Because our transformation does not rely on a window operator on the
backend side but constructs the window contents within a window table, two different
implementations become possible: 1. Preprocess the data in the backend in
order to materialize the window table for the whole time interval fixed within
the STARQL query. The abstract computation model for this implementation is
combined rewriting: The given query is rewritten w.r.t. the TBox and the
rewritten query is posed to a pre-processed ABox resulting from the original ABox by
(partially) materializing TBox axioms (see [
        <xref ref-type="bibr" rid="ref15 ref17 ref18">18,17,15</xref>
        ]). 2. Generate the window
contents on the fly—during query run time. The abstract computation model is
that of classical OBDA, in which the query is rewritten w.r.t. the TBox, unfolded
w.r.t. the mappings and issued to original data—without any preprocessing of
the data source. Our experiments below show that the second approach
outperforms the first one. But the former approach is useful as a caching means for
scenarios with multiple query processing where (many of) the queries use the
same windows on the same streams.
      </p>
      <p>[Figure 1: The STARQL query processing pipeline. A query is parsed (STARQL grammar, parser), normalized (normalization rules, normalizer), translated into a datalog program (datalog transformation rules, mapping rules, datalog transformer), optimized (optimization rules, optimizer), and finally translated into an SQL query and views (SQL transformation rules, SQL transformer).]</p>
      <p>As mentioned before, the aim of this paper is to give a proof-of-concept
implementation for a stream-temporal engine. In particular, regarding the required
OBDA transformations from the ontology layer to the backend sources, the
engine is applicable to the following special case: There is only one stream, the slide
is identical to the pulse, and the sequencing strategy is standard sequencing.
</p>
      <sec id="sec-3-1">
        <title>Temporal Reasoning by Mapping STARQL to SQL</title>
        <p>For reactive diagnosis as investigated in Optique with the Siemens power plant
use case, specific patterns, aka events, are to be found in previously recorded
data. Data represents 1.) measurement values of sensors and 2.) turbines with
assembly groups, mounted sensors and their properties. Reactive diagnosis deals
with analyzing data in order to understand, e.g., reasons for engine shutdown.</p>
        <p>If we consider again the information need discussed in Section 2, it becomes
clear that in the Siemens use case, patterns to be detected are defined w.r.t.
certain time windows in which relevant events take place (e.g., monotonically
increasing in a window of 10 minutes, say). Thus, in this context, temporal
queries are formulated using time windows in the same spirit as time windows
are used for continuous queries in stream processing systems.</p>
        <p>The same queries can be registered as a continuous query or a historical query.
For the latter, it should obviously be possible to specify a period of interest, i.e.,
a starting point and an end point for finding answers.</p>
        <p>We consider the example query of Listing 1 for demonstrating the
transformation of historical queries. Figure 1 shows how historical STARQL queries are
processed internally. The prototype is implemented in Prolog, and the rule sets
used as inputs in Fig. 1 are Prolog (DCG) rules. A query is parsed in order
to produce parse tree data structures, which then are normalized.
Normalization converts FORALL expressions in the HAVING clause into NOT EXISTS
NOT expressions by pushing negation inside. The following listing shows the
normalized HAVING part of our example query.
HAVING NOT EXISTS ?i &lt; ?j IN seq, ?x, ?y:
   (GRAPH ?i { ?s :val ?x } AND GRAPH ?j { ?s :val ?y } AND ?x &gt; ?y);
The normalized query is translated into a datalog program (see Figure 1) with
fresh predicate names being generated automatically. The WHERE expression
is rewritten and unfolded w.r.t. the axioms of the TBox given in the query
and the mapping rules, respectively. Here we assume that sensors are created by
an SQL view defined below. In the body of the first rule, datalog code for the
WHERE clause is inserted (rule q0, see Listing 2). For every language element
found in the parse tree (e.g., NOT EXISTS...), we generate a new datalog rule.</p>
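<p>The normalization rests on the classical equivalence ∀x φ ≡ ¬∃x ¬φ. A quick Python check that both formulations of the monotonicity condition agree (illustrative only; value lists stand in for the ABox sequence):</p>
        <p>
```python
def forall_version(vals):
    # FORALL i < j: vals[i] <= vals[j]
    return all(vals[i] <= vals[j]
               for i in range(len(vals)) for j in range(i + 1, len(vals)))

def not_exists_version(vals):
    # NOT EXISTS i < j: vals[i] > vals[j]  (negation pushed inside)
    return not any(vals[i] > vals[j]
                   for i in range(len(vals)) for j in range(i + 1, len(vals)))

samples = [[90, 93, 94], [93, 94, 92], [92, 93, 95], []]
assert all(forall_version(v) == not_exists_version(v) for v in samples)
```
        </p>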
<p>For every {?x rdf:type C} ({?x R ?y}) mentioned in the WHERE or
HAVING clause we insert C(WID, X) (R(WID, X, Y)) in the datalog program.
WID is an implicit parameter representing a so-called window ID.
Correspondingly, for every GRAPH i {?x rdf:type C} (GRAPH i {?x R ?y}) mentioned in
the HAVING clause we insert C(WID, X, i) (R(WID, X, Y, i)) with i
representing the specific ABox in each window sequence.</p>
        <p>The datalog clauses q1 to q4 in Listing 2 are generated automatically from
the HAVING clause of the query above.
q0(WID, S) :- sensor(WID, S).
q1(WID, S) :- q0(WID, S), not q2(WID, S).
q2(WID, S) :- seq(WID, I), seq(WID, J), q3(WID, I, J, S), I &lt; J.
q3(WID, I, J, S) :- q4(WID, I, J, X, Y, S).
q4(WID, I, J, X, Y, S) :- val(WID, I, X, S), val(WID, J, Y, S), X &gt; Y.</p>
        <p>Listing 2: Generated datalog rules</p>
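<p>Since the generated program is non-recursive and safe, its meaning can be replayed with ordinary set comprehensions. A hypothetical Python evaluation (using the argument convention val(wid, state index, value, sensor) of Listing 2, with made-up facts for two windows):</p>
        <p>
```python
# Hypothetical facts: val(wid, i, x, s) as in Listing 2.
val = {(1, 0, 90, "s0"), (1, 1, 93, "s0"), (1, 2, 94, "s0"),
       (2, 0, 94, "s0"), (2, 1, 92, "s0")}

sensor = {(w, s) for (w, _i, _x, s) in val}
seq = {(w, i) for (w, i, _x, _s) in val}
# q4/q3: a witness pair of states of window w for sensor s with x > y
q4 = {(w, i, j, x, y, s) for (w, i, x, s) in val
      for (w2, j, y, s2) in val if w2 == w and s2 == s and x > y}
q3 = {(w, i, j, s) for (w, i, j, _x, _y, s) in q4}
# q2: sensors violating monotonicity (some ordered pair i < j with x > y)
q2 = {(w, s) for (w, i) in seq for (w2, j) in seq if w2 == w and i < j
      for (w3, i3, j3, s) in q3 if (w3, i3, j3) == (w, i, j)}
# q1: sensors whose values grew monotonically in their window
q1 = {(w, s) for (w, s) in sensor if (w, s) not in q2}
```
        </p>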
<p>The mapping specifications of the stream s_msmt and the relations val and
sensor are predefined using datalog clauses and added to the datalog rules
from a file. The additional mapping rules are shown in Listing 3.
val(WID, Index, Sensor, Value) :-
    window(WID, Index, Timestamp),
    measurement(Timestamp, Sensor, Value).
sensor(WID, Sensor) :- val(WID, _Index, Sensor, _Value).
seq(WID, Index) :- window(WID, Index, _Timestamp).</p>
        <p>Listing 3: Mapping specifications using datalog clauses</p>
        <p>During clause generation, SQL transformation rules are generated (see
Figure 1). Transformation rules represent the name of the SQL relation, the number
of arguments as well as the type of each parameter. SQL transformation rules
are also statically specified for the relations val, window, and seq.</p>
        <p>For the CREATE STREAM and CONSTRUCT expression in the query, a clause
s_out(T, S) :- q1(WID, S), window_range(WID, T).</p>
        <p>is added to the datalog program.</p>
        <p>The datalog program generated by the module Datalog Transformer (see
Figure 1) is then optimized. Optimization eliminates wrapper clauses. A corresponding
optimization of Listing 2 is shown below.
q1(WID, S) :- sensor(WID, S), not q3(WID, S).
q3(WID, S) :- seq(WID, I), seq(WID, J), val(WID, I, X, S),
              val(WID, J, Y, S), X &gt; Y, I &lt; J.</p>
        <p>Here, the datalog rules q0 and q1 have been reformulated to q1 and q2 to q4
from Listing 2 have been reformulated by eliminating wrapper clauses to q3.</p>
        <p>
          Moreover, body atoms are removed if bindings are already generated by other
atoms. In our example we see that a seq clause is already a subclause of the val
clause (Listing 3). Using semantic query optimization [
          <xref ref-type="bibr" rid="ref8 ref9">8,9</xref>
          ], both seq atoms can
be eliminated in q3. As a consequence, the clause for seq is eliminated as well.
s_out(Right, S) :- q1(WID, S), window_range(WID, _Left, Right).
q1(WID, S) :- sensor(WID, S), not q3(WID, S).
q3(WID, S) :- val(WID, I, X, S), val(WID, J, Y, S), I &lt; J, X &gt; Y.
sensor(WID, Sensor) :- val(WID, _Index, Sensor, _Value).
val(WID, Index, Sensor, Value) :-
    window(WID, Index, Timestamp),
    measurement(Timestamp, Sensor, Value).
        </p>
<p>Listing 4: Optimized datalog rules</p>
        <p>
The datalog program is non-recursive and safe. So it can be translated to SQL
as shown in Listing 5. The column names for relations are given by the SQL
Transformation Rules generated by the Datalog Transformer and by mapping
rules given as input to the processing pipeline (see Figure 1).</p>
<p>The translation to SQL relies on the tables window and window_range
(Listing 6), which are based on the stream specification(s) in the query. Given start
($start$) and end ($end$) points for accessing the temporal data as well as the window
size ($window_size$) and the window slide ($slide$), a representation of all
possible windows (with specific time points for the window range), together with all
indexes for the states built for each window by the specified sequencing method,
is computed. For standard sequencing, window and window_range are
generated using PostgreSQL functions such as generate_series.
</p>
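<p>The effect of the window_range and window tables can be mimicked in Python (a sketch with integer seconds in place of PostgreSQL timestamps; parameter names mirror the placeholders of Listing 6):</p>
        <p>
```python
def window_range(start, end, slide, window_size):
    """One (wid, left, right) row per possible window, cf. Listing 6."""
    timestamps = range(start, end + 1, slide)  # like generate_series(start, end, slide)
    return [(wid, t, t + window_size) for wid, t in enumerate(timestamps)]

def window(ranges, measurement_timestamps):
    """(wid, index, timestamp) rows: state indexes per window under
    standard sequencing (indexes start at 1, like rank() in SQL)."""
    rows = []
    for wid, left, right in ranges:
        in_window = sorted({t for t in measurement_timestamps if left <= t <= right})
        rows += [(wid, ind, t) for ind, t in enumerate(in_window, start=1)]
    return rows

ranges = window_range(start=0, end=5, slide=1, window_size=2)
win = window(ranges, measurement_timestamps=[0, 1, 2, 3, 4, 5])
```
        </p>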
      </sec>
      <sec id="sec-3-2">
        <title>Transformations for Continuous STARQL Querying</title>
        <p>
          The transformation above applies to temporal queries on historical data stored
in an RDBMS. So, the window table generation as part of the transformation
above is a one-step generation. In order to cope with streaming data, the
transformation process has to be adapted slightly. The window table is now assumed
to be incrementally updated by some function. Apart from that, the same
transformation as for temporal queries can be applied to realize continuous querying
with STARQL. In fact, the second implementation of the transformation that
we tested does not materialize the whole window table, and so it can be directly
adapted to DBs with dynamically updated entries. Similar ideas for continuous
processing are used for TelegraphCQ [
          <xref ref-type="bibr" rid="ref10">10</xref>
], a DSMS built on top of PostgreSQL.
        </p>
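<p>An incrementally updated window table can be sketched as a small stateful buffer (illustrative Python; the prototype realizes this inside PostgreSQL):</p>
        <p>
```python
from collections import deque

class SlidingWindow:
    """Keep only the tuples of the last `width` time units; evict on update."""
    def __init__(self, width):
        self.width = width
        self.buf = deque()                     # (timestamp, tuple), arrival order

    def push(self, timestamp, tup):
        self.buf.append((timestamp, tup))
        # evict everything outside [timestamp - width, timestamp]
        while self.buf and self.buf[0][0] < timestamp - self.width:
            self.buf.popleft()
        return list(self.buf)                  # current window content

w = SlidingWindow(width=2)
for t, v in [(0, 90), (1, 93), (2, 94), (3, 92)]:
    content = w.push(t, ("s0", v))
# content now holds the tuples with timestamps in [1, 3]
```
        </p>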
<p>CREATE VIEW q1 AS
SELECT rel1.WID, rel1.SID AS S
FROM sensor rel1
WHERE NOT EXISTS (SELECT *
                  FROM q3 rel2
                  WHERE rel2.WID = rel1.WID AND rel2.S = rel1.SID);</p>
<p>Listing 5: SQL transformation result</p>
        <p>CREATE TABLE window_range AS
SELECT row_number() OVER (ORDER BY x.timestamp) - 1 AS wid,
       x.timestamp AS left,
       x.timestamp + $window_size$ AS right
FROM (SELECT generate_series($start$, $end$, $slide$) AS timestamp) x;

CREATE VIEW wid AS
SELECT DISTINCT r.wid, mp.timestamp
FROM measurement mp, window_range r
WHERE mp.timestamp BETWEEN r.left AND r.right;

CREATE TABLE window AS
SELECT wid, rank() OVER (PARTITION BY wid ORDER BY timestamp ASC) AS ind,
       timestamp
FROM wid;</p>
        <p>Listing 6: Window (range) tables
</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Evaluation</title>
      <p>The system is evaluated along two example STARQL queries for reactive
diagnosis in the Siemens use case. They are representative of queries expected to demand
processing times that are quadratic (Query1) and linear (Query2). The engine
transforms the queries w.r.t. predefined mappings into PostgreSQL queries.</p>
      <p>
        The datasets that we use and describe in the following are part of the Siemens
use case [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] in Optique. The original data processed/produced by Siemens
appliances are sensor measurements, event data, operation logs, and other data
stored in tables. These data are confidential, so Siemens provided a small public
dataset and two larger anonymized datasets for use inside Optique.
      </p>
      <p>
Dataset  Total Measured Values  Timespan   Sensors  Measurements per Day/Sensor  Total Data Size
Ds1      82,080                 3 days     19       1440                         5 MB
Ds2      210,000                1506 days  3        46.5                         10 MB
Ds3      515,845,000            1824 days  204      1386                         23,000 MB
      </p>
      <p>The public dataset (approximately 100 MB) has a simplified structure. For the evaluation
we used three datasets: Ds1, Ds2 and Ds3. Ds1 contains public data and has
a size of approximately 5 MB. Ds2 contains anonymous data and has a size of
approximately 10 MB, and Ds3 contains anonymous data with size 23 GB.</p>
      <p>The (simplified) schema of the normalized tables is as follows:
CREATE TABLE measurement (timestamp,sensor,value);
CREATE TABLE sensor (id,assemblypart,name);</p>
      <p>Measurements are represented with a table measurement and consist of a
timestamp, a reference to the sensor, and a value. For our evaluation we are
using one dataset of about 82,000 entries with a timestamp ranging over 3 days
(Ds1), another dataset with about 210,000 entries with a timestamp ranging
over 5 years (Ds2), and a dataset (Ds3) with more than 500 million entries over
5 years. All datasets contain data referring to a number of sensors.</p>
      <p>The datasets differ in the total number of recorded values and also in the
number of measured values per timeframe. In Ds1 and Ds3 a value is measured
every minute. In Ds2 a value is measured only every 30 minutes on average. So
we expect the calculation of a single time window for Ds1 and Ds3 to be more
difficult compared to the calculation for Ds2.</p>
      <p>Query1 (Listing 7) builds, within each window, a sequence of all sensor values
in the last 24 hours and checks whether one sensor increased monotonically. This
query is expected to slow down quadratically as the window size increases, due to
the comparisons of all value pairs (?x, ?y) for all pairs of states (i,j).</p>
      <p>Query2 (Listing 8) outputs, every minute, sensors that show a value higher
than 90. This query is expected to be faster because of its simple window content
with at most one timestamp. Both queries can be transformed to PostgreSQL.
We show only the transformation for Query1 in Listing 9.</p>
      <p>The tests were run in a VM on a system with an i7 2.8 GHz CPU and 16
GB of RAM. Mean values of several test runs with cold cache are shown in the
following tables. For our tests we used two approaches corresponding to combined
rewriting and classical rewriting, respectively (cf. Sect. 3), and for each of these
Query1 and Query2. In the first approach we pre-calculated all time windows
in one large table, where every window has a window id, evaluated both queries
once and a second time with additional window index structures added to the
table.
CREATE STREAM S_out1 AS
SELECT { ?sensor rdf:type :RecentMonInc }&lt;NOW&gt;
FROM burner_regulator [NOW - 24hours, NOW] -&gt; 24hours
SEQUENCE BY StdSeq AS seq
HAVING FORALL i, j IN seq, ?x, ?y
   IF { ?sensor :hasVal ?x }&lt;i&gt; AND { :Regulator :hasVal ?y }&lt;j&gt; AND i &lt; j
   THEN ?x &lt;= ?y ELSE TRUE</p>
<p>Listing 7: STARQL query Query1</p>
      <p>CREATE STREAM S_out2 AS
SELECT { ?sens rdf:type :tooHigh }&lt;NOW&gt;
FROM burner_3 [NOW, NOW] -&gt; 1minute
SEQUENCE BY StdSeq AS seq
HAVING FORALL i IN seq, ?x IF { ?sens :hasVal ?x }&lt;i&gt; THEN ?x &gt; 90</p>
<p>Listing 8: STARQL query Query2</p>
      <p>CREATE OR REPLACE VIEW window_range AS
SELECT row_number() OVER (ORDER BY x.timestamp) - 1 AS wid,
       x.timestamp AS left, x.timestamp + '1 hour'::interval AS right
FROM (SELECT generate_series(MIN(mp.timestamp),
             MAX(mp.timestamp), '1 hour'::interval) AS timestamp
      FROM measurement mp) x;

CREATE OR REPLACE VIEW wid AS
SELECT DISTINCT wid, timestamp
FROM measurement mp, window_range w
WHERE mp.timestamp &gt;= w.left AND mp.timestamp &lt; w.right;</p>
<p>CREATE VIEW win AS
SELECT wid, rank() OVER (PARTITION BY wid ORDER BY timestamp ASC) AS ind,
       timestamp
FROM wid;</p>
      <p>Listing 9: Query1 in PostgreSQL</p>
      <p>Results for the first approach are shown in Table 2. For every query you see
a column with times in seconds for the non-indexed and indexed table. The
non-indexed times consist of generating the table and evaluating the query. In the
indexed version we added a B-tree index. The precalculation column shows the
additional time for generating the window table, which has to be added in every
case to the next two evaluation columns for non-indexed and indexed data.</p>
<p>There are different trade-offs for the two queries. For Query1 the total evaluation
time increases with the window size, while the precalculation time stays the
same. The evaluation time for Ds1 increases faster as it has more measured
values per window, compared to Ds2, which only has about one measured value
per 30 minutes. Comparing the times for indexed and non-indexed data, one sees
that the benefit of the index grows with the number of single windows used per query. On Ds1 we
use only three windows and can decrease time from 30 to 26 seconds. On Ds2
we have about 1800 windows and can decrease the time from 17 to 8 seconds,
which is more than 50 percent. Query2 produces a lot of small windows, so the
evaluation time for each window is very short. The precalculation time for Ds2
increases a lot as we have about 2 million potential single windows in the window
table. With a timeframe of only three days, the precalculation time for Ds1 stays
small. The index has nearly no influence for Query2 as each window has only
one tuple entry and all windows are evaluated once.</p>
      <p>Ds3 could not be evaluated with the precalculation step, as it requires more
than 50 GB additional memory for the window table. Therefore, we implemented
a second approach by additional pl/pgsql functions. The idea was to generate
each window dynamically, evaluate it, and free the memory afterwards.</p>
      <p>Results for the second approach are shown in Table 3. As each window is
generated dynamically, there is no precalculation. On the other hand, no indexes
can be added to a materialized table. The main disadvantage is that windows
without values cannot be filtered out in advance. As there are potentially 2
million windows for Query2 on Ds2, the system also tries to generate the empty
windows, which increases the time considerably. Nevertheless, the problem of additional
required space is solved, and the complete 23 GB of Ds3 can be evaluated.
</p>
    </sec>
    <sec id="sec-5">
      <title>Related Work</title>
      <p>
        Much of the relevant work on stream processing has been done in the context of
DSMS [
        <xref ref-type="bibr" rid="ref10 ref13 ref16 ref3">3,10,13,16</xref>
        ].
      </p>
      <p>
        First steps towards streamified OBDA are stream extensions of SPARQL
with a window operator that has a multi-bag semantics in which timestamps are
dropped [
        <xref ref-type="bibr" rid="ref12 ref22 ref6">6,12,22</xref>
        ]. These approaches avoid the potential inconsistency w.r.t.
functional constraints on sensor values mentioned in the introduction, as they
handle timestamps by reification, for example, by talking about
measurements. Reification, however, may lead to bulky representations of simple facts
and hinders expressing simple functionality constraints (as mentioned above) in OWL.
      </p>
      <p>
        The temporal OBDA approach of [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] uses an LTL-based language with
embedded CQs rather than a sorted FOL language. For engineering applications with
information needs such as the monotonicity example, LTL is not sufficient, as it
does not provide existential quantification on top of the embedded CQs.
      </p>
      <p>
        Though not directly related to OBDA, other relevant work stems from the
field of complex event processing. For example, EP-SPARQL/ETALIS [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] also
uses a sequencing constructor, and T-REX with the event specification language
TESLA [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] uses an FOL language for identifying patterns.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Conclusion and Outlook</title>
      <p>The main objective in designing a query language intended for use
in industry is to find the right balance between expressivity and feasibility.
OBDA opts for low expressivity and high feasibility by choosing rewriting and
unfolding as the methodology for query answering. But even in OBDA, feasibility is
not a feature one gets for free; rather, it has to be achieved with optimizations.
So, for engines implemented according to the OBDA paradigm, one has to show
that such transformations are both theoretically possible and practically feasible.</p>
      <p>In this paper we argued that STARQL provides an adequate solution for
streamified and temporalized OBDA scenarios such as that of Siemens because: 1. It
offers a semantics that allows a unified treatment of querying temporal and
streaming data. 2. It combines high expressivity with safeness to guarantee a
smooth transformation into standard domain-independent backend queries such
as SQL. 3. It can be implemented in an engine that realizes the transformations
such that acceptable query execution times are achievable when run with the
optimizations mentioned here.</p>
      <p>Future work includes, among other things, the following: 1. extensive (scalability)
tests with known benchmarks for stream processing, 2. generalization of the
transformation to multiple streams where the slide parameters are not equal to
the pulse parameter and where non-standard sequencing strategies are used, and
3. an extensive comparison with other approaches, in particular CEP approaches.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Abiteboul</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hull</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vianu</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          : Foundations of Databases. Addison-Wesley (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Anicic</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rudolph</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fodor</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stojanovic</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Stream reasoning and complex event processing in ETALIS</article-title>
          .
          <source>Semantic Web</source>
          <volume>3</volume>
          (
          <issue>4</issue>
          ),
          <fpage>397</fpage>
          -
          <lpage>407</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Arasu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Babu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Widom</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>The CQL continuous query language: semantic foundations and query execution</article-title>
          .
          <source>The VLDB Journal 15</source>
          ,
          <fpage>121</fpage>
          -
          <lpage>142</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Artale</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kontchakov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolter</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zakharyaschev</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Temporal description logic for ontology-based data access</article-title>
          .
          <source>In: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence</source>
          . pp.
          <fpage>711</fpage>
          -
          <lpage>717</lpage>
          . IJCAI'
          <volume>13</volume>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Borgwardt</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lippmann</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thost</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Temporal query answering in the description logic DL-Lite</article-title>
          . In:
          <string-name>
            <surname>Fontaine</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ringeissen</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>R.A.</given-names>
          </string-name>
          (eds.)
          <source>Frontiers of Combining Systems. LNCS</source>
          , vol.
          <volume>8152</volume>
          , pp.
          <fpage>165</fpage>
          -
          <lpage>180</lpage>
          . Springer (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Calbimonte</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jeung</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corcho</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aberer</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Enabling query technologies for the semantic sensor web</article-title>
          .
          <source>Int. J. Semant. Web Inf. Syst</source>
          .
          <volume>8</volume>
          (
          <issue>1</issue>
          ),
          <fpage>43</fpage>
          -
          <lpage>63</lpage>
          (
          <year>Jan 2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Calvanese</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Giacomo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lembo</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lenzerini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Poggi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rodríguez-Muro</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosati</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Ontologies and databases: The DL-Lite approach</article-title>
          . In:
          <string-name>
            <surname>Tessaris</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Franconi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          (eds.)
          <source>Semantic Technologies for Information Systems (RW 2009), LNCS</source>
          , vol.
          <volume>5689</volume>
          , pp.
          <fpage>255</fpage>
          -
          <lpage>356</lpage>
          . Springer (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Chakravarthy</surname>
            ,
            <given-names>U.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grant</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Minker</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Foundations of semantic query optimization for deductive databases</article-title>
          .
          <source>In: Foundations of deductive databases and logic programming</source>
          , pp.
          <fpage>243</fpage>
          -
          <lpage>273</lpage>
          . Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (
          <year>1988</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Chakravarthy</surname>
            ,
            <given-names>U.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grant</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Minker</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Logic-based approach to semantic query optimization</article-title>
          .
          <source>ACM Trans. Database Syst</source>
          .
          <volume>15</volume>
          (
          <issue>2</issue>
          ),
          <fpage>162</fpage>
          -
          <lpage>207</lpage>
          (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Chandrasekaran</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cooper</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deshpande</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Franklin</surname>
            ,
            <given-names>M.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hellerstein</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hong</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krishnamurthy</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Madden</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raman</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reiss</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shah</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          :
          <article-title>TelegraphCQ: Continuous dataflow processing for an uncertain world</article-title>
          .
          <source>In: CIDR</source>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Cugola</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Margara</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>TESLA: A formally defined event specification language</article-title>
          .
          <source>In: Proceedings of the Fourth ACM International Conference on Distributed EventBased Systems</source>
          . pp.
          <fpage>50</fpage>
          -
          <lpage>61</lpage>
          . DEBS '10, ACM, New York, NY, USA (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Della Valle</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ceri</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barbieri</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Braga</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Campi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A first step towards stream reasoning</article-title>
          . In:
          <string-name>
            <surname>Domingue</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fensel</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Traverso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (eds.)
          <source>Future Internet - FIS 2008, LNCS</source>
          , vol.
          <volume>5468</volume>
          , pp.
          <fpage>72</fpage>
          -
          <lpage>81</lpage>
          . Springer Berlin / Heidelberg (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Hwang</surname>
            ,
            <given-names>J.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xing</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Çetintemel</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zdonik</surname>
            ,
            <given-names>S.B.</given-names>
          </string-name>
          :
          <article-title>A cooperative, self-configuring high-availability solution for stream processing</article-title>
          .
          <source>In: ICDE</source>
          . pp.
          <fpage>176</fpage>
          -
          <lpage>185</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Kharlamov</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Solomakhina</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Özcep</surname>
            ,
            <given-names>Ö.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zheleznyakov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hubauer</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lamparter</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roshchin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soylu</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>How semantic technologies can enhance data access at Siemens Energy</article-title>
          .
          <source>In: Proceedings of the 13th International Semantic Web Conference (ISWC 2014)</source>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Kontchakov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lutz</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolter</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zakharyaschev</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>The combined approach to ontology-based data access</article-title>
          . In:
          <string-name>
            <surname>Walsh</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (ed.)
          <source>IJCAI</source>
          . pp.
          <fpage>2656</fpage>
          -
          <lpage>2661</lpage>
          . IJCAI/AAAI (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Krämer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seeger</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Semantics and implementation of continuous sliding window queries over data streams</article-title>
          .
          <source>ACM Trans. Database Syst</source>
          .
          <volume>34</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>49</lpage>
          (
          <year>Apr 2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Lutz</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolter</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Conjunctive query answering in the description logic EL using a relational database system</article-title>
          .
          <source>In: Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI-09)</source>
          . AAAI Press (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Lutz</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolter</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Conjunctive query answering in EL using a database system</article-title>
          .
          <source>In: Proceedings of the 5th International Workshop on OWL: Experiences and Directions (OWLED 2008)</source>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Özçep</surname>
          </string-name>
          , Ö. L..,
          <string-name>
            <surname>Möller</surname>
          </string-name>
          , R.:
          <article-title>Ontology based data access on temporal and streaming data</article-title>
          . In:
          <string-name>
            <surname>Koubarakis</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stamou</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stoilos</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horrocks</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolaitis</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lausen</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Weikum</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (eds.)
          <source>Reasoning Web. Reasoning and the Web in the Big Data Era. Lecture Notes in Computer Science</source>
          , vol.
          <volume>8714</volume>
          . (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Özçep</surname>
            ,
            <given-names>Ö.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Möller</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neuenstadt</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zheleznyakov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kharlamov</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Deliverable D5.1 - A semantics for temporal and stream-based query answering in an OBDA context</article-title>
          .
          <source>Deliverable FP7-318338</source>
          , EU (
          <year>October 2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Özçep</surname>
            ,
            <given-names>Ö.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Möller</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neuenstadt</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>A stream-temporal query language for ontology based data access</article-title>
          .
          <source>In: KI 2014. LNCS</source>
          , vol.
          <volume>8736</volume>
          , pp.
          <fpage>183</fpage>
          -
          <lpage>194</lpage>
          . Springer International Publishing Switzerland (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Phuoc</surname>
            ,
            <given-names>D.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dao-Tran</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parreira</surname>
            ,
            <given-names>J.X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hauswirth</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>A native and adaptive approach for unified processing of linked streams and linked data</article-title>
          .
          <source>In: International Semantic Web Conference (1)</source>
          . pp.
          <fpage>370</fpage>
          -
          <lpage>388</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Schlatte</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Möller</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giese</surname>
            ,
            <given-names>C.N.M.</given-names>
          </string-name>
          :
          <article-title>Deliverable D1.1 - joint phase 1 evaluation and phase 2 requirement analysis</article-title>
          .
          <source>Deliverable FP7-318338</source>
          , EU (
          <year>October 2013</year>
          ), publicly available at http://www.optique-project.eu/results/public-deliverables/
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Tsangaris</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kakaletris</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kllapi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Papanikos</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pentaris</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Polydoras</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sitaridi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stoumpos</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ioannidis</surname>
            ,
            <given-names>Y.E.</given-names>
          </string-name>
          :
          <article-title>Dataflow processing and optimization on grid and cloud infrastructures</article-title>
          .
          <source>IEEE Data Eng. Bull</source>
          .
          <volume>32</volume>
          (
          <issue>1</issue>
          ),
          <fpage>67</fpage>
          -
          <lpage>74</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>