<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>DIVIDE: Adaptive Context-Aware Query Derivation for IoT Data Streams</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mathias De Brouwer</string-name>
          <email>mrdbrouw.DeBrouwer@UGent.be</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dorthe Arndt</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pieter Bont</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Filip D</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AA Tower</institution>
          ,
          <addr-line>Technologiepark-Zwijnaarde 122, B-9052 Ghent</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ghent University</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>In the Internet of Things, it is a challenging task to integrate &amp; analyze high velocity sensor data with domain knowledge &amp; context information in real-time. Semantic IoT platforms typically consist of stream processing components that use Semantic Web technologies to run a set of fixed queries processing the IoT data streams. Configuring these queries is still a manual task. To deal with changes in context information, which happen regularly in IoT domains, queries typically require reasoning on all sensor data in real-time to derive relevant sensors &amp; events. This can be an issue in real-time, as expressive reasoning is required to deal with the complexity of many IoT domains. To solve these issues, this paper presents DIVIDE. DIVIDE automatically derives queries for stream processing components in an adaptive, context-aware way. When the context changes, it derives through reasoning which sensors &amp; observations to filter, given the context &amp; a use case goal, without requiring any more reasoning in real-time. This paper presents the details of DIVIDE, and performs evaluations on a healthcare example showing how it can reduce real-time processing times, scale better when there are more sensors &amp; observations, and can run efficiently on low-end devices.</p>
      </abstract>
      <kwd-group>
        <kwd>Internet of Things</kwd>
        <kwd>Context-aware query derivation</kwd>
        <kwd>Reasoning</kwd>
        <kwd>RDF stream processing</kwd>
        <kwd>N3</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        In the Internet of Things (IoT), there exists a large collection of
internet-connected devices and sensors. IoT-enabled sensors constantly generate data.
The advantage of the IoT is that this data can be easily integrated and combined
with existing domain knowledge and context information. In this way, devices
and applications are able to process and analyze the combined sensor &amp; context
data in order to perform context-aware monitoring of the environment [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
      </p>
      <p>
        The data generated by IoT devices is typically voluminous, heterogeneous,
and has a high velocity [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. As such, it is a challenging task to integrate and
analyze this data on the fly, in order to extract meaningful insights and actuate on it.
      </p>
      <p>
        To deal with these challenges, Semantic Web technologies can be deployed [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
Typical semantic IoT platforms consist of one or more streaming components
that use queries to continuously process the generated data streams. The
heterogeneous data is modeled in ontologies, and existing stream reasoning techniques
are used to perform the advanced data stream processing [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>In different IoT application domains, relevant information about the
context regularly changes. For example, in healthcare, the information contained in
a patient's Electronic Health Record (EHR) continuously evolves throughout a
patient's hospital stay. In the smart cities domain, changing contextual
information heavily impacts applications such as traffic management. This information
updates on a regular basis, as it includes unavailable traffic routes due to road
works, current music or sporting events, whether it is a holiday or not, etc.</p>
      <p>The application context has an influence on how the components of an IoT
platform process and actuate on the generated sensor data. This context directly
impacts the sensors of which the observations should be monitored in detail by
the streaming components, and possibly filtered for further processing by other
platform components. For example, a patient's diagnosis implies which sensors
in his/her hospital room require special attention, while blocked traffic roads
impact which intersection traffic streams should be closely monitored.</p>
      <p>
        In existing semantic IoT platforms, the configuration of queries that run on
the streaming components is a manual, labor-intensive task. To deal with
context changes, two approaches are possible. The first approach uses fixed generic
queries. These queries reason on all sensor observations, to derive in real-time
which are the relevant sensors, and which observations of these sensors should be
filtered, given the current context. In this way, the queries should not be updated
when the context changes. However, ontologies in IoT domains are typically
complex. This requires expressive reasoning, which is computationally expensive [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
This might imply problems in a real-time system, especially when the
component monitors many sensors, or when a high query frequency is required. The
second approach is to run queries that filter the individual sensors that are
relevant within the given context. These queries require little to no real-time reasoning,
which solves the issues of the first approach. However, designing and reconfiguring
them should be done manually upon each context change. This is highly
impractical and infeasible if this needs to be maintained for a full-fledged IoT
network, such as in a hospital. Hence, this approach is almost never applied.
      </p>
      <p>To solve the presented issues, this paper presents the DIVIDE system. In
general, DIVIDE can be seen as an additional component for a semantic IoT
platform, which makes it possible to derive relevant queries for the platform's streaming
components, based on the context and a defined use case goal. These queries are
derived by performing reasoning when the application context changes. Hence,
complex ontology concepts can be filtered in real-time from the observations of
the relevant sensors, without the need to perform any real-time reasoning on
all data. As DIVIDE is able to adaptively derive the individual, newly relevant
queries when the context changes, it actually removes the complexity issues of
the first approach by applying the second approach in an automated way.</p>
      <p>The remainder of this paper is organized as follows. In Section 2, related work
is discussed. Section 3 explains all details of the DIVIDE system. The set-up and
results of the system evaluation are presented in Sections 4 &amp; 5. These results
are further discussed in Section 6. Finally, Section 7 concludes the paper.
</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        To deal with the presented challenges of the IoT, multiple platforms exist that
adopt different Semantic Web technologies [
        <xref ref-type="bibr" rid="ref13 ref17 ref22 ref6">22,6,13,17</xref>
        ]. Most of these platforms
consist of both stream processing components and semantic reasoning
components. They all use different existing technologies for these components, but all
have in common that the configuration of queries on the streaming components
is not automated in an adaptive and context-aware way.
      </p>
      <p>
        Stream Reasoning is the research area that focuses on the adoption of
Semantic Web technologies for streaming data [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Different RDF Stream Processing
(RSP) engines exist [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], such as C-SPARQL, CQELS, and Yasper. These engines
require the registration of a set of fixed queries, which are used to continuously
filter the streaming data in real-time. Recently, a unifying semantic query model,
RSP-QL, has been designed by the W3C RSP Community Group [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        To infer new knowledge from the data, RSP engines try to incorporate
semantic reasoning techniques. The complexity of these techniques depends on the
expressivity of the underlying ontology [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Different ontology languages exist,
ranging from RDFS to OWL 2 DL, with increasing expressivity.
      </p>
      <p>
        Existing RSP engines support at most RDFS reasoning [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. To perform
more expressive reasoning, dedicated semantic reasoners exist. Examples are
RDFox [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and VLog [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], which are OWL 2 RL reasoners. OWL 2 RL contains
all constructs that can be expressed by simple Datalog rules. By design, these
engines are not able to handle streaming data. By adopting techniques from RSP
engines such as windowing, this could be possible. However, reasoning complexity
may be too high to provide real-time answers to high velocity data streams [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
StreamQR [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] is an alternative approach that rewrites continuous RSP queries to
multiple parallel queries, supporting ontologies expressed in the ELHIO logic.
      </p>
    </sec>
    <sec id="sec-3">
      <title>DIVIDE System</title>
      <p>The goal of DIVIDE is the context-aware, adaptive derivation of continuous
queries running on the stream processing components of a semantic IoT
platform, filtering (possibly complex) ontology concepts from the IoT data streams,
without requiring real-time reasoning. This section details the DIVIDE system
step by step, but first starts with the introduction of a running example, that
will be used throughout the remainder of this paper.</p>
      <sec id="sec-3-1">
        <title>Running Example</title>
        <p>In a pervasive health context, smart hospitals of the future consist of
ambient-intelligent care rooms. These rooms are equipped with many IoT-enabled devices,
which contain sensors that continuously generate data. Examples are
environmental sensors (e.g., light and sound sensors) and body sensors (e.g., for heart
rate). Moreover, the existence of intelligent smart home devices allows to control
and automate the lighting, room temperature, and much more.</p>
        <p>A smart hospital typically has a set of medical domain knowledge which is
spread out in a back-end database network. This includes, among others, known
diagnoses and corresponding medical symptoms, i.e., sensitivities. For example,
it may state that a concussion diagnosis implies sensitivities to light and sound,
with a maximum exposure to values of respectively 170 lumen and 30 decibels.
Moreover, all information the hospital knows about a patient, e.g., his diagnosis,
is contained in the patient's EHR. These EHRs are also stored in this database
network, as well as other context information about room set-up, care staff, etc.</p>
        <p>Consider a semantic IoT platform set-up in a smart hospital that consists of a
back-end database network, and a local processing device in each room. Assume
that the domain knowledge &amp; context information, including EHRs, is available
from a knowledge base on a central server, accessing this database network.</p>
        <p>To filter all data generated by the sensors in the room, each local device runs
an RSP engine. The relevant sensors that should be monitored in each room, and
thus the relevant continuous RSP queries, depend fully on the context: which
patient is accommodated in the room, what his diagnosis is, what sensitivities
this diagnosis implies, and what thresholds are associated to these sensitivities.
Moreover, changes to the context occur frequently. Examples are updates to a
patient's EHR, or changes in room occupation. From the viewpoint of a hospital
room, this may imply other relevant queries. Therefore, to automatically and
adaptively derive the relevant RSP queries based on the context, DIVIDE can be
used. Specifically, DIVIDE will look for all queries that filter observations which
require a certain action, corresponding to a crossed threshold. This action will
imply the automatic control of local devices influencing the involved property.
Locally handling the action and propagating it into the system, e.g., sending it to
the back-end to notify a nurse of the event, is left out of scope for this example.
</p>
      </sec>
      <sec id="sec-3-2">
        <title>Building Blocks</title>
        <p>
          DIVIDE is built upon several existing building blocks, which are detailed below.
Ontology For the running example, the medical domain knowledge is described
by the CareRoomMonitoring ontology of the ACCIO continuous care
ontology [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], including all imports of other ACCIO ontologies and external ontologies
such as SAREF [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], SOSA and SSN [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].4 This ACCIO ontology contains a
pattern that links observations with certain types of actions. It defines four generic
ontology classes: Observation, Symptom5, Fault and Action. To illustrate how
these are linked, consider the following ontology definitions:
4 The corresponding ontology files are available at https://github.com/IBCNServices/
DIVIDE/tree/master/saw2019/ontology. This page also contains a figure and
additional explanation about the described ontology observation pattern.
5 Note the difference between Symptom (e.g. ThresholdSymptom) and MedicalSymptom.
LightIntensityAboveThresholdFault ⊑ Fault
LightIntensityAboveThresholdFault ≡ Observation and
( hasSymptom some LightIntensityAboveThresholdSymptom ) and
( madeBySensor some ( isSubsystemOf some ( hasLocation some
( isLocationOf some (
( hasDiagnosis some ( hasMedicalSymptom some SensitiveToLight ))
and ( hasRole some PatientRole ))))))
LightIntensityAboveThresholdSymptom ≡
ThresholdSymptom and ( forProperty some LightIntensity )
HandleHighLightInRoomAction ⊑ AboveThresholdAction
HandleHighLightInRoomAction ≡ LightIntensityAboveThresholdFault
and ( madeBySensor some ( isSubsystemOf some ( hasLocation some
( isLocationOf some LightingDevice ))))
HandleHighLightInRoomAction ≡
AboveThresholdAction and ( forProperty some LightIntensity )</p>
        <p>
Logic and Reasoner DIVIDE uses the rule-based Notation3 Logic (N3) [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
N3 is a superset of RDF/Turtle [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], which means that the RDF/Turtle
representation of the ACCIO ontology is valid N3. A reasoner supporting N3 can reason
within the OWL profile OWL 2 RL [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. DIVIDE uses the EYE reasoner, which
runs in a Prolog virtual machine [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ].
        </p>
        <p>To run the EYE reasoner, a goal can be defined that tells EYE for which
RDF/Turtle triples it should look for evidence. This goal is defined as a rule,
which serves as a filter for EYE. When EYE reasons on its N3 inputs, it
constructs a proof where this rule is the last rule applied. Within DIVIDE, the
reasoner goal should specify the ontology concept that the eventual queries should
filter, which in real-time would require reasoning to derive from an Observation.</p>
        <p>For the running example, the goal is to filter observations which require an
action corresponding to a crossed threshold. Hence, it is defined as follows.</p>
        <p>{ ?x a AboveThresholdAction . } =&gt; { ?x a AboveThresholdAction . } .
</p>
      </sec>
      <sec id="sec-3-3">
        <title>Sensor Query Rule</title>
        <p>To use DIVIDE to derive the queries that need to run on a stream
processing engine, a generic formalism has been designed. This formalism defines the
generic pattern of such a query, together with information on when and how to
instantiate it. Each such description is called a sensor query rule.</p>
        <p>
          The presented formalism builds further on SENSdesc [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], which is the result
of previous research. The theoretical SENSdesc work has initiated the idea and
format to describe sensor queries in such a way that they can be combined with
formal reasoning to retrieve queries contributing to a user-defined goal. In this
paper, this format is further generalized and improved, in order to be practically
usable for generic use cases in DIVIDE.
        </p>
        <p>A sensor query rule consists of three parts. To explain this with an example,
consider the sensor query rule for the running example as defined in Listing 1.
Relevant Context In the antecedence of the rule, the context in which the
query might become relevant is described in a generic fashion. In Listing 1, this
part is described in lines 1–10. It looks for a patient who has a certain diagnosis</p>
        <p>Listing 1. Sensor query rule for the running example. Prefix declarations are omitted.
1  { ?p DUL:hasRole [ a RoleCompetenceAccio:PatientRole ] ;
2       DUL:hasLocation ?l ;
3       CareRoomMonitoring:hasDiagnosis [
4         CareRoomMonitoring:hasMedicalSymptom [
5           SSNiot:hasThreshold [
6             DUL:hasDataValue ?threshold ;
7             SSNiot:isThresholdOnProperty [ a ?prop ] ] ] ] .
8    ?sensor a sosa:Sensor ; sosa:observes [ a ?prop ] ;
9       SSNiot:isSubsystemOf [ DUL:hasLocation ?l ] .
10   ?prop rdfs:subClassOf sosa:ObservableProperty . }
11 =&gt;
12 { _:q a sd:Query ; sd:pattern :pattern-1 ;
13     sd:inputVariables (("?th" ?threshold) ("?s" ?sensor) ("?prop" ?prop)) ;
14     sd:outputVariables (("?v" _:v) ("?o" _:o)) .
15
16   _:o a sosa:Observation ; sosa:madeBySensor ?sensor ; sosa:hasResult
17       [ a SSNiot:QuantityObservationValue ; DUL:hasDataValue _:v ] ;
18     SSNiot:hasSymptom [
19       a SSNiot:ThresholdSymptom ; ssn:forProperty [ a ?prop ] ] . } .
20
21 :pattern-1 a sd:QueryPattern ; sh:prefixes :prefixes ; sh:construct """
22   CONSTRUCT { ?o a CareRoomMonitoring:AboveThresholdAction ;
23                  ssn:forProperty ?prop . }
24   FROM NAMED WINDOW :win ON &lt;http://idlab.ugent.be/grove&gt;
25     [ RANGE PT1S TUMBLING ]
26   WHERE { WINDOW :win {
27     ?o a sosa:Observation ; sosa:madeBySensor ?s ;
28        sosa:hasResult [ DUL:hasDataValue ?v ] ; sosa:resultTime ?t ;
29        General:hasId [ General:hasID ?id ] .
30     FILTER ( xsd:float(?v) &gt; xsd:float(?th) ) } }
31   ORDER BY DESC(?t) LIMIT 1""" .</p>
        <p>that is linked to a MedicalSymptom. This MedicalSymptom (sensitivity) needs
to be linked with a threshold on a specific property, e.g., LightIntensity. If
there exists a sensor in the same room that is observing that specific property,
the query described in the next step might be relevant.</p>
        <p>Generic Query In the first part of the rule's consequence, the generic query
is described. This query is written in RSP-QL format, and is defined using the
SHACL standard. In addition, the query's input variables are defined, which
need to be instantiated to make the query specific for the relevant context. This
will happen through the rule evaluation during the query derivation.</p>
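        <p>To give a rough feel for this instantiation step, the sketch below substitutes a set of bindings into a trimmed, hypothetical variant of the pattern in Listing 1. The variable names (?th, ?s) follow Listing 1, but the trimmed pattern, the sensors:A0 identifier and the string-substitution approach are illustrative assumptions; DIVIDE itself performs the instantiation through EYE reasoning.</p>
        <preformat>
```python
# Illustrative sketch only: substitute bound input variables into a generic
# RSP-QL pattern string. The pattern is a trimmed, hypothetical variant of
# Listing 1, not DIVIDE's actual query representation.
GENERIC_PATTERN = """\
CONSTRUCT { ?o a CareRoomMonitoring:AboveThresholdAction . }
FROM NAMED WINDOW :win ON :grove [ RANGE PT1S TUMBLING ]
WHERE { WINDOW :win {
  ?o a sosa:Observation ; sosa:madeBySensor ?s ;
     sosa:hasResult [ DUL:hasDataValue ?v ] .
  FILTER ( xsd:float(?v) > xsd:float(?th) ) } }"""

def instantiate(pattern: str, bindings: dict) -> str:
    """Substitute the bound input variables; output variables such as ?o and
    ?v stay untouched. Longer variable names are substituted first so that
    '?s' can never clobber a longer variable that starts with '?s'."""
    for var, value in sorted(bindings.items(),
                             key=lambda kv: len(kv[0]), reverse=True):
        pattern = pattern.replace(var, value)
    return pattern

# Hypothetical bindings for the light sensor of the running example:
query = instantiate(GENERIC_PATTERN, {"?th": '"170"', "?s": "sensors:A0"})
print(query)
```
        </preformat>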
        <p>In Listing 1, lines 12–14 and 21–31 describe the generic query. Lines 22–31
describe the actual RSP-QL query that should run on an RSP engine. The
WHERE clause specifies that the query filters observations made by a certain
sensor (?s) that are higher than a certain threshold (?th). For any filtered
observation individual, new triples are constructed specifying that it is of type
AboveThresholdAction, linked to a certain property (?prop). Note that this
class exactly matches the class specified in the reasoner's goal in Section 3.2. This
makes sense, as the goal is used to specify the ontology concepts that the queries
need to filter. If this linked property is, for example, LightIntensity, it
follows from the ontology definitions in Section 3.2 that this is equivalent to
a HandleHighLightInRoomAction.</p>
        <p>In addition to the RSP-QL query, line 13
of the sensor query rule defines that the specific sensor, threshold and
property variables should be substituted into the query to instantiate it. During the
query derivation process, the actual values for these variables will depend on the
matching query context defined in the rule's antecedence.</p>
        <p>Ontology Consequences The second part of the rule's consequence describes
the effects of a query result. A result is obtained when the rule's antecedence
holds, and an instantiated version of the query actually filters an observation.
This part defines the consequences of this observation in terms of the ontology.</p>
        <p>In Listing 1, this part is in lines 16–19. If a sensor observation above a
defined threshold is filtered, represented by the blank node _:o, this Observation is
linked to a ThresholdSymptom for the considered property. For LightIntensity,
this is equivalent to a LightIntensityAboveThresholdSymptom.</p>
        <p>Note that the sensor query rule of the running example is generic in the
sense that it can be used for any property that is threshold-based, as all steps
use the variable ?prop. In this way, the rule should only be defined once, in
order to be used generically in a hospital context. Moreover, when defining the
context for the query derivation, it should not be explicitly stated that a patient
is sensitive to this property. By defining the diagnosis of a patient, the relevant
ontology definitions and sensor query rule will enable a rule-based reasoner to
automatically derive the associated sensitivities.
</p>
      </sec>
      <sec id="sec-3-4">
        <title>Context-Aware Query Derivation with DIVIDE</title>
        <p>Given the generic sensor query rule defined in the previous section, the DIVIDE
system can be used to automatically derive relevant RSP queries in a
context-aware fashion. Figure 1 shows the different system components, including their
inputs and outputs. Two components can be distinguished: apart from the actual
query derivation, DIVIDE can first be used to preprocess the ontology.
Ontology Preprocessing The domain ontology is considered not to change
throughout the lifetime of the application, in contrast with the context data.
Therefore, this ontology can be preprocessed upfront by DIVIDE using the EYE
reasoner, in order to speed up the actual query derivation process.</p>
        <p>
          The preprocessing process consists of three steps. First, an N3 copy of the full
ontology is created. Second, specialized ontology-specific rules are created from
the original rules taken from the OWL 2 RL profile description6. Starting the
EYE reasoning process from these specialized rules will reduce the computational
complexity of the reasoning [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Third, an image of the EYE reasoner, which has
already loaded the ontology and specialized rules, is compiled within Prolog. In
this way, they do not need to be loaded into the reasoner each time it is called.
Query Derivation The DIVIDE query derivation process is called each time
a part of the context in the central knowledge base changes, e.g., the context
related to a specific patient or room in the hospital use case.
        </p>
        <p>The first process step starts from the EYE Prolog image compiled in the
preprocessing step. It reads in the considered context, the sensor query rule and
the reasoner goal. Given these inputs, EYE constructs a proof that derives all
instances of the ontology concept defined in the goal. To get from the context
to the goal, the evaluation of the sensor query rule is crucial. If the antecedence
of the rule holds one or multiple times, it means that the rule can be evaluated.
If the triples in the rule's consequence also allow the reasoner to derive the
ontology concept defined in the goal, the rule will actually be evaluated for the
antecedence's context and will appear in the proof. This means that the generic
query will also be evaluated, with the query's input variables being instantiated.</p>
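        <p>The role of the antecedence in this process can be pictured with a toy conjunctive pattern matcher: each way the antecedence matches the context yields one set of variable bindings, and hence one candidate query instantiation. This is a heavily simplified stand-in for EYE's proof construction, and the context triples and predicate names below are invented for illustration.</p>
        <preformat>
```python
# Toy conjunctive pattern matcher (a heavily simplified stand-in for EYE):
# enumerate all variable bindings that make every antecedence triple occur
# in the context. Variables are strings starting with '?'.
def match(pattern, triples, binding=None):
    binding = binding or {}
    if not pattern:
        yield dict(binding)
        return
    head, rest = pattern[0], pattern[1:]
    for triple in triples:
        b, ok = dict(binding), True
        for p, t in zip(head, triple):
            if p.startswith("?"):
                if b.setdefault(p, t) != t:   # variable already bound differently
                    ok = False
                    break
            elif p != t:                      # constant mismatch
                ok = False
                break
        if ok:
            yield from match(rest, triples, b)

# Invented context triples for one care room:
context = [
    ("patient1", "hasDiagnosis", "Concussion"),
    ("Concussion", "impliesThresholdOn", "LightIntensity"),
    ("sensorA0", "observes", "LightIntensity"),
]
# Invented, simplified antecedence of a sensor query rule:
antecedent = [
    ("?p", "hasDiagnosis", "?d"),
    ("?d", "impliesThresholdOn", "?prop"),
    ("?sensor", "observes", "?prop"),
]
for b in match(antecedent, context):
    print(b["?sensor"], b["?prop"])  # each match drives one query instantiation
```
        </preformat>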
        <p>Once the proof has been constructed by EYE, the second step looks for all
queries in the proof. This is done by a simple reasoning step in EYE, looking for
all proof steps that include the generic pattern in lines 12–13 of Listing 1.</p>
        <p>In a third and final step, the system transforms the generic RSP-QL query,
defined in the sensor query rule, into an instantiated query. This happens for each
pattern extracted from the proof in step 2, through another forward reasoning
step with EYE. As such, the system outputs all queries that filter the ontology
concept defined in the goal: if such a query filters an observation, it can immediately
be concluded that this observation is an instance of this concept, without the
need to perform the reasoning step anymore. This holds as long as that part of
the context does not change. When it does change, the DIVIDE query derivation
process should run again to (possibly) update the relevant queries.</p>
        <p>Considering the running example, the goal of the EYE reasoner is to
derive instances of the concept AboveThresholdAction. To do so, it follows from
the definitions in Section 3.2 that the reasoner will, among others, try to
look for individuals of the subclass HandleHighLightInRoomAction. To
derive that an Observation individual is of this type, the reasoner requires,
among other triples, a LightIntensityAboveThresholdSymptom linked to the
Observation via the hasSymptom object property. This is equivalent to the
second part of the consequence of the sensor query rule in Listing 1 (lines
16–19). Hence, if all other requirements are fulfilled to derive an instance of
HandleHighLightInRoomAction, the rule will be evaluated for each situation in
the input context where the antecedence holds for ?prop being LightIntensity.
6 https://www.w3.org/TR/owl2-profiles/#OWL_2_RL</p>
        <p>For example, in CareRoomMonitoring, the Concussion diagnosis is linked
to a sound sensitivity with threshold 30, and a light sensitivity with threshold
170. Consider the context of a hospital room consisting of a patient diagnosed
with concussion, containing a light sensor A0, and at least one lighting device.
For this context, the output of the query derivation process will contain a query
filtering observations of sensor A0 higher than 170. If the room also contains a
sound sensor A1 and at least one device influencing the room's sound level, the
output will also contain a query filtering observations of sensor A1 above 30. If
a new patient is brought into the room that has a different diagnosis with other
sensitivities, rerunning the query derivation process will no longer output these
queries, but others depending on the exact context and ontology definitions.</p>
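        <p>To make the net effect of the derivation concrete: at runtime, each derived query boils down to a plain numeric comparison per observation, with no reasoning left to perform. The sketch below uses hypothetical data structures, not DIVIDE's actual query representation.</p>
        <preformat>
```python
# Hypothetical runtime state after DIVIDE's derivation for the concussion
# example: each derived query reduces to one (sensor, threshold) check,
# so no ontology reasoning happens per observation.
derived_queries = {"A0": 170.0,   # light sensor threshold, in lumen
                   "A1": 30.0}    # sound sensor threshold, in decibel

def filter_observation(sensor_id, value):
    """True when the observation should yield an AboveThresholdAction."""
    threshold = derived_queries.get(sensor_id)
    return threshold is not None and value > threshold

assert filter_observation("A0", 180.5)       # above the light threshold
assert not filter_observation("A0", 150.0)   # below the threshold
assert not filter_observation("A7", 999.0)   # no derived query for this sensor
```
        </preformat>
        <p>When the context changes, only the derived_queries mapping would be replaced by a new derivation run; the per-observation check itself stays this cheap.</p>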
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Evaluation Set-up</title>
      <p>In this section, the DIVIDE system is evaluated. Three evaluations are
performed, which all consider the use case and ontology of the running example
described in Section 3.1. The context considered in each evaluation is one
single-person hospital room, containing a patient diagnosed with concussion.</p>
      <sec id="sec-4-1">
        <title>DIVIDE Performance Evaluation</title>
        <p>To assess the performance of the DIVIDE system presented in Section 3.4, the
duration of the ontology preprocessing and query derivation processes is
measured. The evaluation considers the described evaluation context, with 10 sensors
in the concussion patient's room, including one light sensor and one sound
sensor. The reasoner goal and sensor query rule are as described in Section 3.2 and
Listing 1. Given these inputs, two queries will be outputted: one for the light
sensor and one for the sound sensor.7 The evaluation is performed on a device with
a 2800 MHz quad-core Intel Core i5-7440HQ CPU and 16 GB DDR4-2400 RAM.
</p>
        <p>
          Comparison of DIVIDE with Real-Time Reasoning Approaches
The DIVIDE approach allows the detection of complex events in the sensor
stream, without performing real-time reasoning. Alternatively, one could use
other, traditional approaches, which do require real-time reasoning. Therefore,
the real-time filtering approach used in DIVIDE is compared with two real-time
reasoning approaches, both using the same reasoning profile as DIVIDE, i.e.,
OWL 2 RL. Both approaches use RDFox, as this is known as one of the fastest
OWL 2 RL reasoning engines [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. For each approach, the goal is to detect any
AboveThresholdAction individual in the sensor stream.
        </p>
        <p>The following set-ups are considered8, which are visualized in Figure 2:
7 All evaluation files (scripts, inputs &amp; outputs) are available at
https://github.com/IBCNServices/DIVIDE/tree/master/saw2019/evaluations/divide-performance.
8 The queries running on each set-up are available at
https://github.com/IBCNServices/DIVIDE/tree/master/saw2019/evaluations/real-time-comparison.
For the windowing, Esper (https://www.espertech.com/esper) is used.</p>
        <p>Fig. 2. The evaluation set-ups: (a) DIVIDE (1), (b) StreamFox (2), (c) C-SPARQL &amp; RDFox (3).</p>
        <p>
          1. DIVIDE approach using C-SPARQL without reasoning: regular
C-SPARQL engine [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. No ontology or context data is loaded into the engine,
and no reasoning is performed during the continuous query evaluation. The
two RSP-QL queries outputted by DIVIDE (see Section 4.1) are translated to
the C-SPARQL syntax and run continuously on the C-SPARQL engine,
each with their own logical tumbling window of 1 second.
2. StreamFox: streaming version of RDFox. Consists of one engine that pipes
Esper for windowing with RDFox for reasoning, via a processing queue.
Initially, the ontology and context data are loaded into the data store of the
RDFox engine, and a reasoning step is performed. Two generic SPARQL
queries are registered. Query 1 looks for observations above a threshold
within the valid context, and creates a ThresholdSymptom for them; it can
be seen as the SPARQL alternative for the generic sensor query rule in
Listing 1. Query 2 retrieves any (derived) AboveThresholdAction individual.
Windowing is performed with a logical tumbling window of 1 second. On each
window trigger, the window content is added as one event to a processing
queue. When available, RDFox takes an event from the queue, incrementally
adds it to the RDFox data store (i.e., it performs incremental reasoning
with the event scheduled for addition), and executes the registered queries
in order. If query 1 yields a non-empty result, this is incrementally added to
the store, before query 2 is executed. Finally, RDFox performs incremental
reasoning with the event scheduled for deletion (i.e., incremental deletion).
3. C-SPARQL piped with (non-streaming) RDFox: Initially, the RDFox
data store contains the ontology and context data, and a reasoning step is
performed. For the C-SPARQL engine, query 1 of set-up 2 is modified to
run as a continuous C-SPARQL query on a logical tumbling window of 1
second of the observation stream, and on the ontology and context triples.
C-SPARQL does not perform reasoning during the query evaluation. It sends
each query result to the event stream of the non-streaming RDFox engine,
which adds it to a processing queue. When the event is processed, RDFox
incrementally adds it to the data store, executes query 2 of set-up 2, and
incrementally deletes the event from the data store.
        </p>
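The queue-based control flow of the StreamFox set-up (window trigger → processing queue → incremental addition → queries → incremental deletion) can be sketched as follows. This is only an illustrative Python sketch: an in-memory set stands in for the RDFox data store, and the sensor names, values and threshold query are hypothetical.

```python
from collections import deque

class WindowedPipeline:
    """Illustrative sketch of set-up 2 (StreamFox): a 1-second logical
    tumbling window feeds a processing queue; each windowed event is
    incrementally added to a store, queried, and incrementally deleted."""

    def __init__(self):
        self.store = set()    # stands in for the RDFox data store
        self.queue = deque()  # processing queue between windower and reasoner

    def on_window_trigger(self, window_content):
        # on each window trigger, the window content is queued as one event
        self.queue.append(tuple(window_content))

    def process_next(self, query):
        # take the next event from the queue, add it, query, then delete it
        event = self.queue.popleft()
        self.store.update(event)             # incremental addition
        result = [t for t in self.store if query(t)]
        self.store.difference_update(event)  # incremental deletion
        return result

# hypothetical (sensor, value) observations; the query detects values above 170
pipe = WindowedPipeline()
pipe.on_window_trigger([("light1", 200), ("temp1", 21)])
above = pipe.process_next(lambda obs: obs[1] > 170)
```

Real incremental reasoning (materialization maintenance) in RDFox is of course far more involved than set insertion and deletion; the sketch only mirrors the control flow of the set-up.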
<p>The number of sensors in the context depends on the evaluated scenario. During
each scenario run, every sensor produces one observation per second, for a
duration of 25 seconds. In all evaluated scenarios, there is always exactly one light
sensor that consistently produces a value higher than 170 lumen, which is the
threshold for concussion patients. Hence, this Observation will always result in an
AboveThresholdAction, given the considered context. Regardless of the exact
number of sensors, no observations by any other sensor are filtered by any query.</p>
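The scenario above can be sketched as a window generator. Only the rates, the 25 second duration and the 170 lumen threshold come from the text; the sensor names and the below-threshold value are hypothetical.

```python
def generate_scenario(n_sensors, duration_s=25, threshold=170):
    """Every sensor emits one observation per second for 25 seconds;
    sensor 0 plays the light sensor that always exceeds the 170 lumen
    threshold for concussion patients (the exact values are made up)."""
    for second in range(duration_s):
        window = []
        for sensor in range(n_sensors):
            value = threshold + 30 if sensor == 0 else threshold - 70
            window.append((second, f"sensor{sensor}", value))
        yield window  # content of one 1-second tumbling window

# e.g. a run with 20 sensors: 25 windows of 20 observations each,
# exactly one of which is above the threshold
windows = list(generate_scenario(20))
```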
        <p>For all evaluated scenarios, the total execution time metric has been
calculated for each window. This time refers to the time starting from the Esper
window trigger until the moment where the found AboveThresholdAction
individuals are outputted by the corresponding query. For the DIVIDE set-up 1,
the maximum time over the two filtering queries is taken. For set-ups 2 and 3,
this total execution time ends when RDFox yields the results of query 2.</p>
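As a minimal sketch of this metric, assuming timestamps in seconds:

```python
def total_execution_time(window_trigger_ts, query_output_ts):
    """Time from the window trigger until results are output; for the
    DIVIDE set-up the maximum over its two filtering queries is taken
    (for set-ups 2 and 3 the list holds a single query-2 timestamp)."""
    return max(ts - window_trigger_ts for ts in query_output_ts)

# hypothetical timestamps: the window triggers at t=10.0 s, and the two
# DIVIDE queries output their results at t=10.03 s and t=10.05 s
t_exec = total_execution_time(10.0, [10.03, 10.05])
```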
        <p>All evaluations for each set-up are run on a processing device suited for the
IoT: an Intel NUC, model D54250WYKH. It has a 1300 MHz dual-core Intel
Core i5-4250U CPU (turbo frequency 2600 MHz) and 8 GB DDR3-1600 RAM.
</p>
      </sec>
      <sec id="sec-4-2">
        <title>Real-Time DIVIDE Performance on a Raspberry Pi</title>
        <p>To evaluate how well the ltering approach of the DIVIDE system performs on
a low-end device, the DIVIDE set-up 1 of Section 4.2 is also evaluated on a
Raspberry Pi 3, Model B. This Raspberry Pi model has a Quad Core 1.2GHz
Broadcom BCM2837 64bit CPU, 1GB RAM and MicroSD storage. Besides the
physical machine, the same evaluation conditions as in Section 4.2 apply.
</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Evaluation Results</title>
      <p>This section presents the results for the evaluation set-ups described in Section 4.
All results are averaged over 30 runs, excluding 3 warm-up and 2 cool-down runs.
</p>
      <sec id="sec-5-1">
        <title>DIVIDE Performance Evaluation</title>
        <p>Comparison of DIVIDE with Real-Time Reasoning Approaches</p>
        <p>(Figure: (a) ontology preprocessing; (b) query derivation.)</p>
        <p>
sensors, while the C-SPARQL–RDFox pipe set-up goes up to 228 ms. No results
were measured for StreamFox for more than 20 sensors, due to the infeasibility
of properly measuring the exponentially increasing total execution times.</p>
        <p>To further inspect the set-up behaviors over the engines' runtime, Figure 5
shows a timeline comparing the total execution time, averaged per window
number, for a context with 20 sensors. For StreamFox, this shows that within each
run, the total execution times also increase exponentially over the windows.
This shows the accumulation of the event processing: for window 1, this time is
only 486 ms; for window 10, it is 3430 ms; and for window 20, it is 15844 ms.
For the other two set-ups, the total execution times are somewhat higher at the
start, caused by starting up C-SPARQL. Note that the last windows are omitted
from the results, as StreamFox did not finish the processing of these windows.
</p>
      </sec>
      <sec id="sec-5-2">
        <title>Real-Time DIVIDE Performance on a Raspberry Pi</title>
        <p>Figure 3 shows the results of the evaluation of the DIVIDE set-up on the
Raspberry Pi. It shows a distribution of the total execution times for scenarios with
different numbers of sensors. Going from 1 to 80 sensors, there is a small increase
in average total execution time, from 38 ms to 81 ms. In general, there are some
outliers with a higher total execution time, especially for higher numbers of
sensors. Comparing these results with those in Figure 4, where the device was
the only difference in evaluation conditions, the average total execution times
on the Raspberry Pi are always a factor of 3 to 4 higher than on the Intel NUC.
An important advantage of the DIVIDE system is that it removes the need to
perform real-time reasoning. The evaluation results in Sections 5.2 &amp; 5.3 show
that this has a significantly positive impact on the total execution
time to derive the conclusions relevant for the use case. Applied to the running
healthcare example, DIVIDE in combination with the well-known
RSP engine C-SPARQL significantly outperforms the evaluated alternatives.</p>
        <p>Before discussing these results in more detail, note that they show the
performance of the compared set-ups for multiple numbers of sensors. In all set-ups,
each individual measure is calculated on the window content of a 1 second
tumbling window. As each sensor had an event rate of 1 observation per second, the
number of sensors always equaled the number of observations in each window.
Hence, apart from the number of sensors in the context, the results generalize
to other situations with fewer or more sensors but a higher or lower event rate,
leading to the same number of incoming observations per second. Therefore, they
also give an idea of the general throughput of each set-up.</p>
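This load equivalence boils down to the number of observations per window being the product of sensor count, event rate, and window size, as in this small sketch:

```python
def observations_per_window(n_sensors, rate_hz, window_s=1.0):
    """The results generalize by load: what matters is the number of
    observations per window, i.e. sensors x event rate x window size."""
    return n_sensors * rate_hz * window_s

# 20 sensors at 1 Hz load a 1-second window exactly like 10 sensors at 2 Hz
load_a = observations_per_window(20, 1)
load_b = observations_per_window(10, 2)
```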
        <p>
          Considering the StreamFox set-up, the results show how its performance
degrades with an increasing number of sensors. Inspecting the timeline for 20
sensors in Figure 5, it is clear that the total execution time increases
exponentially as the scenario goes on. This happens because the RDFox reasoning
time increases per evaluated window. When the number of observations in the
window content increases, the initial incremental reasoning step to add the event
takes longer. In incremental reasoning, however, the most expensive operation is
the removal of facts [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. Hence, the duration of the incremental deletion step
increases the most. As this step happens after query 2 has outputted the actions,
it does not influence the total execution time for that event. However, for a larger
number of sensors, the total processing time of one event, including the removal,
surpasses the 1 second threshold. Hence, when the next window triggers and the
event is added to the processing queue, its processing cannot immediately start.
In this way, the removal of the previous event impacts the total execution time
of this new event. As the scenario goes on, this impact accumulates, and the
waiting time for windowed events in the queue gets longer and longer.
Regarding this, two things should be noted. First, as a consequence, the average total
execution times over the StreamFox runtime, as reported in Figure 4a, are highly
dependent on the number of consecutive non-empty windows, i.e., the scenario
duration. This was 25 seconds for this evaluation, but increasing or decreasing
it will also increase or decrease these average values. Second, the large
increase in reasoning times, especially for the event removal, is caused by the large
number of rules extracted from the ACCIO ontology by RDFox. However, this
is realistic for complex IoT domains such as healthcare, where large bodies of
complex domain knowledge are required to correctly analyze the sensor streams.
        </p>
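The accumulation effect can be illustrated with a toy queueing model. It deliberately assumes a fixed per-event processing time; in the actual measurements, the incremental addition and deletion steps themselves also grow with the store size, compounding the increase beyond what this sketch shows.

```python
def simulate_queue_delays(process_time_s, n_windows, window_s=1.0):
    """Toy model: windows trigger every window_s seconds, but each event
    occupies the reasoner for process_time_s seconds (including the
    deletion step). Once process_time_s > window_s, events queue up and
    the trigger-to-output time grows with every window."""
    free_at = 0.0  # time at which the reasoner becomes available again
    totals = []
    for w in range(n_windows):
        trigger = w * window_s
        start = max(trigger, free_at)     # wait until the queue drains
        free_at = start + process_time_s
        totals.append(free_at - trigger)  # trigger-to-output time
    return totals

# hypothetical 1.5 s of processing per 1 s window: delays keep growing
delays = simulate_queue_delays(1.5, 5)
```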
        <p>Inspecting the results of the set-up piping C-SPARQL with RDFox, the main
conclusion is that this set-up does not scale as badly with an increasing number
of sensors. An increase in total execution time is noticeable, but only slight for up
to 60 sensors. Nevertheless, it consistently takes at least 6 times longer than with
the DIVIDE set-up. Important to note here is that the RDFox execution time
does not depend on the number of sensor observations, as only one observation
is filtered by C-SPARQL in all evaluation cases. The main difference is in the
C-SPARQL query execution times, which take longer because they are executed
on a model that also contains all triples in the context and the ontology.</p>
        <p>In contrast to the two alternatives, the DIVIDE set-up on the Intel NUC
hardly suffers from an increasing number of sensors. This is because more
sensors in the context do not influence the queries derived by DIVIDE, given no
other context changes. As the queries are only executed on the streams, do not
take into account ontology or context data, and do not require reasoning, the impact
on the actual query execution time is also minimized. In addition, the results in
Figure 6 show that C-SPARQL can also run the DIVIDE queries efficiently on a
low-end device like a Raspberry Pi. The total execution times are larger than on
the Intel NUC, but for up to 80 sensors, most still remain below 150 ms. This is
an advantage when deploying the system, for example in all rooms of a hospital,
as no large-scale investment in expensive high-end hardware is required.</p>
        <p>The reasoning required by DIVIDE is not performed on the observation
stream, but only on the ontology and context data. The query derivation
process is triggered by context changes, which typically occur at a frequency
much lower than the observation data frequency. Hence, the number
of reasoning steps is significantly reduced. The evaluation results in Section 5.1
show that such a query derivation takes approximately 1.2 seconds for a realistic
context of a hospital room with 1 patient and 10 sensors, on a regular PC. This
time is of course highly dependent on the input data, but optimizations are
always possible. By performing the ontology preprocessing, the duration of the query
derivation process can also be greatly reduced; for this paper's example, this reduction was
approximately 77%. Note as well that when using DIVIDE in a real set-up, this
reasoning will be performed on a central server with many resources, introducing
possibilities for parallelization and process acceleration. By using EYE and N3,
the flexibility also exists to extend the rule set beyond OWL 2 RL.</p>
        <p>Importantly, the usage of DIVIDE also has other benefits that do not relate
to execution times. Being able to locally derive certain conclusions, e.g., actions
the system should take, gives an IoT set-up the local autonomy to react to
certain events in a responsive way. Moreover, in contrast to the other set-ups, no
context information needs to be known and kept up to date locally. This removes
synchronization issues, but also avoids potential privacy and security concerns.</p>
        <p>The DIVIDE system produces RSP-QL queries for a given context, which
can be translated to the correct RSP engine syntax to continuously run on it.
By adding a module to DIVIDE that automatically calls the query derivation
process upon context changes and performs this translation, the whole query
configuration of RSP engines could be fully automated and adaptive. Hence,
with DIVIDE, this will no longer be a manual, labor-intensive task.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>In this paper, the DIVIDE system is presented, which can serve as a component
of a semantic IoT platform. The main goal of DIVIDE is to automatically derive
queries for an IoT platform's stream processing components, which filter the data
streams, in an adaptive and context-aware way. Whenever the application
context changes, DIVIDE can derive the queries that filter the observations of
interest for the use case, based on this changed context. By performing the reasoning
upon context changes, relevant sensors &amp; their observations can be filtered
without the need to perform reasoning while evaluating the continuous queries. The
evaluation results show that this approach greatly reduces the real-time
processing times, and scales much better when the number of events or sensors
in the data stream increases. In this way, the real-time filtering can be performed
efficiently on low-end devices. When used in an IoT platform, DIVIDE can
divide the set of queries that need to be deployed at any given time and thus
conquer the scalability &amp; performance issues of reasoning on large data streams.</p>
      <p>Future work consists of further generalizing the sensor query rule
description, to reduce the configuration required for using DIVIDE. One possibility is
to integrate dynamic observation patterns into the queries, which could be part of
the stream metadata. In addition, it should be researched how the query
instantiation could be extended to other query parameters, such as the window
parameters, possibly by integrating context metadata such as device information.</p>
      <p>Acknowledgements. F. Ongenae is funded by a UGent BOF postdoc grant.
Part of this research was funded by the FWO SBO grant 150038 (DiSSeCt).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Aggarwal</surname>
            ,
            <given-names>C.C.</given-names>
          </string-name>
          , et al.:
          <article-title>The Internet of Things: A survey from the data-centric perspective</article-title>
          .
          <source>In: Managing and mining sensor data</source>
          , pp.
          <volume>383</volume>
–
          <fpage>428</fpage>
          . Springer (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Arndt</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , et al.:
          <article-title>Improving OWL RL reasoning in N3 by using specialized rules</article-title>
          .
          <source>In: OWLED 2015</source>
          . pp.
          <volume>93</volume>
–
          <fpage>104</fpage>
          . Springer (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Arndt</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , et al.:
          <article-title>SENSdesc: Connect Sensor queries and Context</article-title>
          .
          <source>In: BIOSTEC 2018</source>
          . pp.
          <volume>1</volume>
–
          <issue>8</issue>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Barbieri</surname>
            ,
            <given-names>D.F.</given-names>
          </string-name>
          , et al.:
          <article-title>C-SPARQL: a continuous query language for RDF data streams</article-title>
          .
          <source>International Journal of Semantic Computing</source>
          <volume>4</volume>
          (
          <issue>1</issue>
          ),
          <volume>3</volume>
–
          <fpage>25</fpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Berners-Lee</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , et al.:
          <article-title>N3logic: A logical framework for the world wide web</article-title>
          .
          <source>Theory and Practice of Logic Programming</source>
          <volume>8</volume>
          (
          <issue>3</issue>
          ),
          <volume>249</volume>
–
          <fpage>269</fpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Bonte</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , et al.:
          <article-title>Streaming MASSIF: Cascading Reasoning for Efficient Processing of IoT Data Streams</article-title>
          .
          <source>Sensors</source>
          <volume>18</volume>
          (
          <issue>11</issue>
          ),
          <volume>3832</volume>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Calbimonte</surname>
            ,
            <given-names>J.P.</given-names>
          </string-name>
          , et al.:
          <article-title>Query rewriting in RDF stream processing</article-title>
          .
          <source>In: ESWC 2016</source>
          . pp.
          <volume>486</volume>
–
          <fpage>502</fpage>
          . Springer (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Compton</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , et al.:
          <article-title>The SSN ontology of the W3C Semantic Sensor Network Incubator Group</article-title>
          .
          <source>Web Semantics</source>
          <volume>17</volume>
          ,
          <issue>25</issue>
–
          <fpage>32</fpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Cyganiak</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , et al.:
          <article-title>RDF 1.1 concepts and abstract syntax</article-title>
          .
          <source>W3C Recommendation</source>
          . (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Daniele</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , et al.:
          <article-title>Created in close interaction with the industry: the smart appliances reference (SAREF) ontology</article-title>
          .
          <source>In: FOMI 2015</source>
          . pp.
          <volume>100</volume>
–
          <fpage>112</fpage>
          . Springer (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Dell'Aglio</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , et al.:
          <article-title>RSP-QL semantics: A unifying query model to explain heterogeneity of RDF stream processing systems</article-title>
          .
          <source>IJSWIS</source>
          <volume>10</volume>
          (
          <issue>4</issue>
          ),
          <volume>17</volume>
–
          <fpage>44</fpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Dell'Aglio</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , et al.:
          <article-title>Stream reasoning: A survey and outlook</article-title>
          .
          <source>Data Science (Preprint)</source>
          ,
          <volume>1</volume>
–
          <fpage>25</fpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Mileo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , et al.:
          <article-title>StreamRule: a nonmonotonic stream reasoning system for the semantic web</article-title>
          .
          <source>In: RR 2013</source>
          . pp.
          <volume>247</volume>
–
          <fpage>252</fpage>
          . Springer (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Motik</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , et al.:
          <article-title>OWL 2 web ontology language profiles</article-title>
          .
          <source>W3C Recommendation</source>
          . (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Nenov</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , et al.:
          <article-title>RDFox: A highly-scalable RDF store</article-title>
          .
          <source>In: ISWC 2015</source>
          . pp.
          <volume>3</volume>
–
          <fpage>20</fpage>
          . Springer (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Ongenae</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , et al.:
          <article-title>An ontology co-design method for the co-creation of a continuous care ontology</article-title>
          .
          <source>Applied Ontology</source>
          <volume>9</volume>
          (
          <issue>1</issue>
          ),
          <volume>27</volume>
–
          <fpage>64</fpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Puiu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , et al.:
          <article-title>CityPulse: Large scale data analytics framework for smart cities</article-title>
          .
          <source>IEEE Access 4</source>
          ,
          <issue>1086</issue>
–
          <fpage>1108</fpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Su</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          , et al.:
          <article-title>Adding semantics to Internet of Things</article-title>
          .
          <source>Concurrency and Computation: Practice and Experience</source>
          <volume>27</volume>
          (
          <issue>8</issue>
          ),
          <fpage>1844</fpage>
          –
          <lpage>1860</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Su</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          , et al.:
          <article-title>Stream reasoning for the Internet of Things: Challenges and gap analysis</article-title>
          .
          <source>In: WIMS</source>
          <year>2016</year>
          . ACM (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Urbani</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , et al.:
          <article-title>Column-oriented datalog materialization for large knowledge graphs</article-title>
          .
          <source>In: AAAI</source>
          <year>2016</year>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Verborgh</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , et al.:
          <article-title>Drawing conclusions from linked data on the web: The EYE reasoner</article-title>
          .
          <source>IEEE Software 32(3)</source>
          ,
          <volume>23</volume>
–
          <fpage>27</fpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Ye</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , et al.:
          <article-title>Semantic web technologies in pervasive computing: A survey and research roadmap</article-title>
          .
          <source>Pervasive and Mobile Computing</source>
          <volume>23</volume>
          ,
          <issue>1</issue>
–
          <fpage>25</fpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>