     Rules and RDF Streams - A Position Paper

              James Anderson1 , Tara Athan2 and Adrian Paschke2
                                1
                                Datagraph, GmbH.
                               james@dydra.com,
                     WWW home page: http://dydra.com/
           2
             Corporate Semantic Web, Freie Universität Berlin, Germany
                    [taraathan|paschke]@inf.fu-berlin.de,
                    http://www.corporate-semantic-web.de/



       Abstract. We propose a minor extension of the Graph Store Protocol
       and the SPARQL syntax that should be sufficient to enable the appli-
       cation of rules to some kinds of RDF Streams (as defined by the RDF
       RDF Stream Processing W3C Community Group) to generate new RDF
       streams. The IRI identifying an RDF stream is extended with a query
       string field-value pair defining a sequence of windows of the stream. The
       extended IRI is passed to a data concentrator, which recasts the stream
       as a dynamic RDF dataset. A SPARQL query may then be applied to
       this dynamic dataset whenever it is updated, i.e. when the content of
       the next window has been fully received. The SPARQL query uses the
       CONSTRUCT form to generate a new element of the resultant RDF
       stream. The approach is illustrated with a prototypical example from
       the healthcare domain.


1     Business Case
RDF Graph Stores are increasingly used in the private and public sector to man-
age knowledge. In practice, although these Graph Stores can be dynamic, most
individual transactions involve either complete replacement or small changes.
When transactions are made by many users simultaneously or through streaming
from data sources, the rate of change can be quite fast, relative to the timescale
of query processing. Effectively delivering an up-to-date view (i.e. results of a
stored query) of a dynamic Graph Store in real time requires a different ap-
proach to query processing than that implemented in most RDF query engines
currently available.
    The RDF standards have not addressed dynamics to a significant extent,
other than to informally propose a name: “We informally use the term RDF
source to refer to a persistent yet mutable source or container of RDF graphs.
An RDF source is a resource that may be said to have a state that can change
over time. A snapshot of the state can be expressed as an RDF graph.” [1]
    The RDF Stream Processing Community Group3 is taking on a specialized
case of dynamic RDF called an RDF Stream, corresponding to the case of a
3
    https://www.w3.org/community/rsp/
sequence of RDF datasets called “timestamped RDF graphs”, where a partic-
ular triple in the default graph has been designated as the timestamp triple.
A sequence of overlapping windows (contiguous subsequences) can, in general,
be considered as snapshots of a dynamic Graph Store4 such that whenever the
window shifts to a new position, a few stream elements expire from and a few
new stream elements are added to the dynamic Graph Store.
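
    The correspondence just described can be illustrated with a minimal sketch;
representing stream elements as (timestamp, graph-name) pairs is a simplification
of the RSP abstract syntax, and all names here are hypothetical:

```python
from datetime import datetime, timedelta

def window_contents(elements, start, width):
    """Return the timestamped graphs whose timestamps fall in
    the half-open interval [start, start + width)."""
    end = start + width
    return [g for (t, g) in elements if start <= t < end]

# A toy stream of (timestamp, graph-name) pairs, one per stream element.
t0 = datetime(2016, 1, 1, 12, 0, 0)
stream = [(t0 + timedelta(minutes=m), f"g{m}") for m in range(6)]

# Two adjacent windows of width 3 minutes, sliding by 1 minute:
w1 = window_contents(stream, t0, timedelta(minutes=3))
w2 = window_contents(stream, t0 + timedelta(minutes=1), timedelta(minutes=3))

assert w1 == ["g0", "g1", "g2"]
assert w2 == ["g1", "g2", "g3"]
```

When the window shifts by one minute, one element (g0) expires from the dynamic
Graph Store and one new element (g3) is added, as the text above describes.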

1.1     Business Drivers
Autonomous, distributed sources will eventually yield data in quantities limited
only by the aggregate bandwidth of wireless transmission media. It has been
estimated that the market for devices and services in support of this processing
will exceed one trillion dollars by 2020 [2]. Based on the relative expenditures
of established telecommunication firms such as AT&T or Deutsche Telekom, whose
software budgets constitute roughly one percent of gross income [3, 4], one can
expect the software costs to reach billions of dollars, of which database services
constitute an undisclosed, but substantial, portion.
    The task to purposefully process this data promises to be formidable. A ser-
vice which leverages the advantages of RDF data processing, to provide a mech-
anism for rule based integration, validation and evaluation of diverse streaming
data sources, will offer significant advantage.
    Particular drivers in this setting are the following:
    – develop a mechanism to augment assertions during query processing of dy-
      namic RDF Graph Stores, at essentially the complexity of SPARQL CONSTRUCT;
    – create means to express actions that should happen when a specified integrity
      constraint is violated by a dynamic RDF Graph Store;
    – implement real-time processing of such features with good scaling properties
      in size and rate of change of the Graph Store;
    – enable linked data to fulfil compliance requirements [5];
    – expand the query capabilities available to dynamic RDF Graph Store users,
      by extending standard query capabilities over streams;
    – enhance the management capabilities of dynamic RDF Graph Store publish-
      ers, by automating both internal updates and their propagation;
    – increase user and owner confidence in published RDF sources;
    – increase dynamic RDF Graph Store customer satisfaction by minimizing the
      latency associated with automatic updates and integrity-constraint checking.

2      Technological Challenges
RDFS and OWL are the traditional technologies for expressing conceptual mod-
els for RDF data, and can in theory be used to augment assertions during query
processing, e.g. SPARQL under the RDFS- or OWL-entailment regime.
4
    Because blank nodes are shared between elements of an RDF stream, the dy-
    namic Graph Store must also share blank nodes between versions if this corre-
    spondence is to hold.
    Due to the Open World Assumption, RDFS and OWL models cannot be
used for validation of RDF according to a data model, except for the extreme
case that results in inconsistency.
    SPARQL queries express validation and integrity constraints well, but the
queries for these tasks are not intuitive, and thus are difficult to properly con-
struct and maintain. It is imperative that extensions to incorporate rules and
process data streams avoid compounding these difficulties.
    The challenge is to develop a language with the optimum satisfaction of the
following characteristics:

 – a syntax that is both intuitive for the audience (close to RDF) and suffi-
   ciently expressive;
 – well-defined execution semantics;
 – integration into remote service control and data flows;
 – real-time performance, which requires efficient target dataset generation
   (whether from update diffs or inherent stream structure) and an activation-
   network implementation [6–8];
 – support for named graphs and RDF Datasets as static or dynamic sources;
 – support for modularization;
 – descriptions of reactions (e.g. to integrity-constraint violations).


3   RDF-Stream Options

Numerous alternatives (see Table 1) have been proposed to query dynamic RDF
data. These include proposals for both streaming and versioned data. In order
to provide a service which supports queries of adequate complexity, we expect
we will need to provide the following combinations and variations:

 – treat queries as autonomous views in order to promote reuse;
 – separate query expressions from temporal attributes in order to permit com-
   binations;
 – permit temporal attributes to be computed as an aspect of query processing;
 – remain compatible with standard SPARQL in order to permit sharing and
   simplify development;
 – remain compatible with standard RDF data models;
 – permit named graphs in order to fulfil application requirements and permit
   compatibility with standard encodings;
 – restrict the processing model to just essential components in order to limit
   the deployment and management complexity.

    This perspective eliminates those proposals which either reify statements in
the data model or add a temporal term. It also argues against a processing model
which involves query re-writing and delegation, and against language extensions
which encode temporal attributes as static values in special clauses. It leads to
the proposal for Dydra: combine REVISION and WINDOW specifications for temporal
properties to designate the target dataset; make the temporal state available
through the function THEN (which returns, for example, the time at which the
processor had received all the elements of a stream window); permit the window
or revision expression to contain variables; but otherwise conform to SPARQL 1.1.


4   Rule-based Options
There are several existing languages for applying rules to RDF.
SPARQL Inferencing Notation (SPIN), also called SPARQL Rules, is a W3C
   Member Submission5 as well as a “de facto industry standard”6 implemented
   by Top Braid. The Member Submission defines the SPIN syntax via a schema,
   but does not provide semantics. In particular, although the SPIN submission
   is claimed to be an alternative syntax for SPARQL, the transformation from
   SPIN to SPARQL has not been published. Further, although SPIN uses an
   RDF-based syntax, this alone is not sufficient to achieve rule semantics; a
   sufficiently expressive entailment regime must be defined (e.g. based on the
   as-yet unpublished transformation to SPARQL) that asserts the constructed
   source jointly with the queried sources.
Shape Expressions Language (ShEx) is specified in a W3C member submis-
   sion7 . It is intended to fill the same role for RDF graphs that the schema
   language Relax NG fills for XML, and its design is inspired by Relax NG,
   particularly in its foundation on regular expressions. ShEx has both a compact
   and an RDF-based syntax, in the same way that Relax NG has a compact
   and an XML-based syntax. The execution semantics of ShEx is well-defined,
   but is of limited expressivity. In particular, ShEx does not support named
   graphs or RDF Datasets. ShEx precedents strictly target REST applications [19];
   it is not clear whether ShEx is applicable to streams. ShEx does not support
   modularization, but this extension could be easily incorporated following
   the approach of Relax NG, or, better, a monotonic restriction of that ap-
   proach [20]. ShEx has no constructive capability, and so can only perform
   validation.
Shapes Constraint Language (SHACL) is intended as a successor to and exten-
   sion of ShEx, developed by the W3C RDF Data Shapes Working Group8 . SHACL
   does not support named graphs or RDF Datasets directly, but has an ad-
   vanced syntax that allows the use of SPARQL. SHACL does not support
   modularization; it is not obvious how to make this extension. Like ShEx,
   SHACL has no constructive capability, and so can only perform validation.
Semantic Web Rule Language (SWRL) is a W3C Member Submission9 with
   well-defined semantics having a logical expressivity that exceeds SPARQL.
5
  SPIN - Overview and Motivation https://www.w3.org/Submission/2011/
  SUBM-spin-overview-20110222/
6
  http://spinrdf.org/
7
  Shape Expressions 1.0 Definition: https://www.w3.org/Submission/shex-defn/
8
  https://www.w3.org/2014/data-shapes/wiki/Main_Page
9
  https://www.w3.org/Submission/SWRL/


[[56]]   GraphPatternNotTriples ::=
     OptionalGraphPattern | GroupOrUnionGraphPattern |
     MinusGraphPattern | GraphGraphPattern |
     RevisionGraphPattern | WindowsGraphPattern | ServiceGraphPattern
[[60a]]  RevisionGraphPattern ::=
     'REVISION' ( Var | Revision | String ) GroupGraphPattern
[[60b]]  WindowsGraphPattern ::=
     'WINDOWS' WindowRef GroupGraphPattern
[[60c]]  WindowRef ::= ( 'R' Integer? '/' )? WindowRelation
[[60d]]  WindowRelation ::=
     Revision ( '/' ( Revision | XPathDuration )
                    ( '/' ( XPathDuration | Integer ) )? )?
[[60e]]  Revision ::= UUID | /HEAD(~[0-9]+)?/ | XPathDateTime | Integer

[[111b]] NullOperator ::= 'RAND' | 'NOW' | 'UUID' | 'STRUUID' | 'THEN'


                         Fig. 1. SPARQL grammar extension


   Due to the coverage of e.g. relations and functions with arbitrary arity, the
   syntax is not close to RDF triples.
Rule Interchange Format [21] (RIF) is a W3C Recommendation. It includes
   a standard mapping from RDF triples to RIF frames10 , allowing rules to be
   applied to RDF triples. This approach requires recasting RDF into RIF,
   violating the requirement to remain close to RDF. Also, it is not clear whether
   RIF is applicable to RDF datasets and Graph Stores, as the mapping of RDF
   triples to RIF frames does not address these forms.

    Each of the existing languages described above has drawbacks that make it
unsatisfactory relative to the evaluation criteria of the business case. Instead,
we consider a minor extension to SPARQL and the Graph Store protocol. Fig-
ure 1 describes the grammar extensions to support elementary dataset revi-
sions, dataset interval revisions, and repeated interval datasets (aka windows of
streams).
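
    One reading of productions [[60c]]–[[60e]] in Figure 1 can be sketched as a
small parser; the function and its field labels (repeat, revision, width, slide)
are hypothetical illustrations, and only the HEAD revision form is distinguished:

```python
import re

def parse_window_ref(text):
    """Split a WindowRef such as 'R/HEAD/PT05M/PT01M' into its parts:
    an optional repetition marker, a revision, and up to two trailing
    duration/integer fields (cf. grammar rules [[60c]]-[[60e]])."""
    parts = text.split("/")
    result = {"repeat": None, "revision": None, "width": None, "slide": None}
    if parts and re.fullmatch(r"R\d*", parts[0]):
        result["repeat"] = parts.pop(0)     # 'R' with optional count
    if parts:
        result["revision"] = parts.pop(0)   # e.g. HEAD, HEAD~3, a UUID ...
    if parts:
        result["width"] = parts.pop(0)      # e.g. PT05M
    if parts:
        result["slide"] = parts.pop(0)      # e.g. PT01M
    return result

ref = parse_window_ref("R/HEAD/PT05M/PT01M")
assert ref == {"repeat": "R", "revision": "HEAD",
               "width": "PT05M", "slide": "PT01M"}
```

A non-repeating reference such as 'HEAD/PT05M' parses the same way, simply
leaving the repetition marker and slide fields empty.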
    Figures 3 and 4 illustrate a query which combines static base data and a
temporal clause (in this case in the “validity time” domain) with a clause for
which the dataset constitutes a stream.
    Figure 2 depicts how the generated solution propagation graph is amenable
to a static temporality analysis, based upon which the process control is tailored
to the dataset form.
    In Figure 3, we show a query supporting the “connected patient” scenario
[22]; the patient is monitored with physiology (including heart rate) and activity
sensors. The objective is to notify a designated responder with an SMS message
when there is a combination of sensor readings suggesting an abnormally elevated
heart rate that is not explained by vigorous physical activity. A sequence of
windows over the heart-rate and activity sensor streams is defined by strings of
the form '?WINDOWS=R/HEAD/PT05M/PT01M', which define a sequence of window functions
that are applied to the stream. Following ISO-8601 [23], the text R/ means a
10
     RIF-RDF       Compatibility         https://www.w3.org/TR/rif-rdf-owl/#RDF_
     Compatibility

[The figure, an algebra-operator diagram, is reproduced here only in summary: a
CONSTRUCT at the root generates a new stream element asserting an fhir:Observation
with code lr:AbnormallyHighHeartRate for the patient, annotated with
prov:generatedAtTime and a stream IRI; below it, a filter compares the windowed
average heart rate against a threshold; SERVICE clauses address the activity and
heart-rate stream windows with their aggregations (maximum energy expenditure;
average and most recent heart rate); and further joins, filters, and extends bind
the patient, the sensor stream identifiers, age- and activity-dependent threshold
factors, and a 30-day baseline heart-rate average and standard deviation.]


                          Fig. 2. Data Propagation Graph



recurring sequence of temporal intervals. The text HEAD/ is an indexical telling
the processor, a data concentrator component, to start the windowing operation
as soon as possible. The text PT05M/ specifies the duration of the temporal
interval of the time-based window function, while PT01M/ specifies the duration
of the interval between start points of adjacent window functions. The query,
written in the syntax of the proposed SPARQL extension, constructs the content
of the SMS message.
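
    The windowing just described can be sketched as follows; window_intervals and
the restricted duration parser (covering only the PTnnM forms used here) are
illustrative names, not part of the proposal:

```python
from datetime import datetime, timedelta
import re

def parse_minutes(duration):
    """Parse the restricted ISO-8601 duration form PTnnM into a timedelta."""
    m = re.fullmatch(r"PT(\d+)M", duration)
    return timedelta(minutes=int(m.group(1)))

def window_intervals(head, width, slide, count):
    """Return (start, end) pairs for a recurring window sequence beginning
    at `head`, with window width `width` and slide `slide` between starts."""
    w, s = parse_minutes(width), parse_minutes(slide)
    return [(head + i * s, head + i * s + w) for i in range(count)]

head = datetime(2016, 1, 1, 12, 0, 0)
ivs = window_intervals(head, "PT05M", "PT01M", 3)

# With a five-minute width and a one-minute slide, adjacent
# windows overlap by four minutes:
assert ivs[0] == (head, head + timedelta(minutes=5))
assert ivs[1][0] - ivs[0][0] == timedelta(minutes=1)
```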
    The approach illustrated in Figures 3, 4 relies on certain properties of the
RDF stream that do not hold in general, but hold within the RDF Distinct
Time-Series profile11 . In particular, the timestamps of the stream elements are of
datatype xsd:dateTimeStamp, and no two elements in the stream have the same
timestamp, giving a total order to the stream elements. This permits a unique
interpretation of HEAD as the most recent window of the stream.
    For simplicity, in the concatenation above, the IRI of the RDF stream is as-
sumed not to already have a query string. It is straightforward to accommodate
this possibility in the query.
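
    A sketch of that accommodation, using only the standard library (the helper
name is hypothetical):

```python
from urllib.parse import urlparse, urlencode, parse_qsl, urlunparse

def with_windows(iri, window_ref):
    """Append a WINDOWS field-value pair to an RDF stream IRI,
    whether or not the IRI already carries a query string."""
    parts = urlparse(iri)
    query = parse_qsl(parts.query)          # existing pairs, possibly empty
    query.append(("WINDOWS", window_ref))
    # safe="/" keeps the slashes of the window reference unescaped
    return urlunparse(parts._replace(query=urlencode(query, safe="/")))

assert with_windows("http://example.org/hr", "R/HEAD/PT05M/PT01M") == \
    "http://example.org/hr?WINDOWS=R/HEAD/PT05M/PT01M"
assert with_windows("http://example.org/hr?a=1", "R/HEAD/PT05M/PT01M") == \
    "http://example.org/hr?a=1&WINDOWS=R/HEAD/PT05M/PT01M"
```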

11
     RDF     Distinct  Time-Series   profile  http://streamreasoning.github.
     io/RSP-QL/Abstract%20Syntax%20and%20Semantics%20Document/
     #distinct-time-series-profile

5      Implementation Approach
The proposed approach, to express temporal specifications in a SPARQL dataset
designator, has its origin in the unique architecture of Dydra12 , Datagraph’s
SPARQL query service. This comprises two independent components, the RDF
data store and the SPARQL query processor. These communicate through a
limited interface, which allows the processor only to associate a specific dataset
revision with an access transaction as its target, and to use statement patterns
to perform count, match, and scan operations on this target.
   The following changes will suffice to support streams based on this architec-
ture:
 – Extend the RDF store’s SPARQL Graph Store Protocol implementation to
   interpret intra-request state as indicating transaction boundaries. These can
   be HTTP chunking boundaries, RDF graph boundaries, or some other marker.
   Based on current import rates, the store should be able to accept 10^5 state-
   ments per second per stream, with the effective rate for sensor readings de-
   pendent on the number of statements per reading.
 – Extend the dataset designators to designate the dataset state which cor-
   responds to some specific set of transactions. The specification will permit
   combinations of temporal and cardinality constraints.
 – Extend the query processor control model from one which performs a single
   reduction of an algebra graph to one in which individual branches can be
   reiterated according to their temporal attributes, while unchanged interme-
   diate results are cached and re-used.
 – Extend the response encoding logic to permit alternative stream packag-
   ing options, among them repeated HTTP header/content encoding and HTTP
   chunking with and without boundary significance.
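
    The third change, re-iterating only those branches whose temporal attributes
have advanced while re-using unchanged intermediate results, can be sketched as
revision-keyed memoization; the class and its interface are hypothetical:

```python
class BranchCache:
    """Cache intermediate results per algebra branch, keyed by the
    revision of the data the branch depends on: a branch is only
    re-evaluated when its source revision has advanced."""
    def __init__(self):
        self._cache = {}   # branch id -> (revision, result)

    def evaluate(self, branch_id, revision, compute):
        cached = self._cache.get(branch_id)
        if cached and cached[0] == revision:
            return cached[1]              # unchanged: re-use the result
        result = compute()                # changed: re-evaluate the branch
        self._cache[branch_id] = (revision, result)
        return result

calls = []
cache = BranchCache()
cache.evaluate("static-base", 1, lambda: calls.append("eval") or [1, 2])
cache.evaluate("static-base", 1, lambda: calls.append("eval") or [1, 2])
assert calls == ["eval"]   # the second call hits the cache
```

A static branch (e.g. base patient data) would keep a fixed revision and never
re-run, while a windowed branch advances its revision with every window shift.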

6      Conclusion and Future Work
We have presented the grammar for a SPARQL extension and described a related
Graph Store protocol extension that enables a sliding window operation on an
RDF stream to be interpreted as a dynamic dataset. We illustrated the usage of
this extension to describe a continuous query, with rules given in the SPARQL
CONSTRUCT/WHERE form, on an RDF Stream in the distinct time-series
profile with a sample query and a data propagation diagram.
    Future work includes generalizing the SPARQL and Graph Store protocol ex-
tensions to cover a larger class of RDF streams, and demonstrating the feasibility
and performance of the extension with a prototype implementation.

Acknowledgement
This work has been partially supported by the “InnoProfile-Transfer Corporate
Smart Content” project funded by the German Federal Ministry of Education
12
     http://dydra.com

and Research (BMBF) and the BMBF Innovation Initiative for the New German
Länder - Entrepreneurial Regions. The authors wish to thank Davide Sottara of
Arizona State University for assistance developing the connected patient query,
and the reviewers for helpful suggestions.


References

 1. Cyganiak, R., Wood, D., Lanthaler, M.: RDF 1.1 Concepts and Abstract Syntax.
    W3C Recommendation (February 2014)
 2. Norton, S.: Internet of Things market to reach $1.7 trillion by 2020: IDC.
    http://blogs.wsj.com/cio/2015/06/02/
    internet-of-things-market-to-reach-1-7-trillion-by-2020-idc/ (June 2015)
 3. AT&T, Inc.:          mobilizing your world: AT&T inc. 2015 annual re-
    port. http://www.att.com/Investor/ATT_Annual/2015/downloads/att_ar2015_
    completeannualreport.pdf (2015)
 4. Deutsche Telekom: Answers for the digital future: The 2015 financial year. http:
    //www.telekom.com/static/-/298764/9/160225-q4-allinone-si (2015)
 5. Solbrig, H.R., Prud’hommeaux, E., Sharma, D.K., Chute, C.G., Jiang, G.: Fea-
    sibility of modeling HL7 FHIR profiles using the RDF Shape Expressions lan-
    guage. In: SWAT4LS 2015: Semantic Web Applications and Tools for Life Sci-
    ences. CEUR-WS.org/Vol-1546/ 208–209
 6. Rinne, M., Nuutila, E., Törmä, S.: INSTANS: High-performance event processing
    with standard RDF and SPARQL. In: 11th International Semantic Web Conference
    ISWC 2012, CEUR-WS.org/Vol-914/ 101
 7. Rinne, M., Abdullah, H., Törmä, S., Nuutila, E.: Processing heterogeneous RDF
    events with standing sparql update rules. In: On the Move to Meaningful Internet
    Systems: OTM 2012. Springer (2012) 797–806
 8. Rinne, M., Törmä, S., Nuutila, E.: SPARQL-based applications for RDF-encoded
    sensor data. In: Semantic Sensor Networks. Volume 904. CEUR-WS.org/Vol-904/
    (2012) 81–96
 9. Gutierrez, C., Hurtado, C., Vaisman, A.: Temporal RDF. In: European Semantic
    Web Conference, Springer (2005) 93–107
10. Vander Sande, M., Colpaert, P., Verborgh, R., Coppens, S., Mannens, E., Van
    de Walle, R.: R&Wbase: git for triples. In: Linked Data on the Web Workshop.
    (2013)
11. Graube, M., Hensel, S., Urbas, L.: R43ples: Revisions for triples. Proc. of LDQ
    (2014)
12. Barbieri, D.F., Braga, D., Ceri, S., Valle, E.D., Grossniklaus, M.: Querying RDF
    streams with C-SPARQL. ACM SIGMOD Record 39(1) (2010) 20–26
13. Le-Phuoc, D., Dao-Tran, M., Parreira, J.X., Hauswirth, M.: A native and adaptive
    approach for unified processing of linked streams and linked data. In: International
    Semantic Web Conference, Springer (2011) 370–388
14. Calbimonte, J.P., Corcho, O., Gray, A.J.: Enabling ontology-based access to
    streaming data sources. In: International Semantic Web Conference, Springer
    (2010) 96–111
15. Anicic, D., Fodor, P., Rudolph, S., Stojanovic, N.: EP-SPARQL: a unified lan-
    guage for event processing and stream reasoning. In: Proceedings of the 20th
    International Conference on World Wide Web, ACM (2011) 635–644

16. Rinne, M., Nuutila, E., Törmä, S.: INSTANS: High-performance event processing
    with standard RDF and SPARQL. In: Proceedings of the ISWC 2012 Posters &
    Demonstrations Track. Volume 914. CEUR-WS.org (2012) 101–104
17. Meimaris, M., Papastefanatos, G., Viglas, S., Stavrakas, Y., Pateritsas, C., Anag-
    nostopoulos, I.: A query language for multi-version data web archives. arXiv
    preprint arXiv:1504.01891 (2015)
18. Anderson, J., Bendiken, A.: Transaction-time queries in dydra. In: Joint Pro-
    ceedings of the 2nd Workshop on Managing the Evolution and Preservation of the
    Data Web (MEPDaW 2016) and the 3rd Workshop on Linked Data Quality (LDQ
    2016) co-located with 13th European Semantic Web Conference (ESWC 2016):
    MEPDaW-LDQ 2016. Volume 1585. CEUR-WS.org/Vol-1585/ (2016) 11–19
19. Gayo, J.E.L., Prud’hommeaux, E., Solbrig, H.R., Rodríguez, J.M.: Validating and
    describing linked data portals using RDF shape expressions. In: LDQ 2014: Linked
    Data Quality, CEUR-WS.org/Vol-1215/
20. Athan, T., Boley, H.: The MYNG 1.01 suite for deliberation RuleML 1.01: Taming
    the language lattice. In: Rule Challenge @ RuleML 2014. Volume 1296 of CEUR
    Workshop Proceedings, CEUR-WS.org/Vol-1296/ (2014)
21. Boley, H., Hallmark, G., Kifer, M., Paschke, A., Polleres, A., Reynolds, D.: RIF
    Core Dialect (February 2013). W3C Recommendation,
    http://www.w3.org/TR/rif-core/
22. Athan, T., Bell, R., Kendall, E., Paschke, A., Sottara, D.: API4KP metamodel:
    A meta-API for heterogeneous knowledge platforms. In Bassiliades, N., Gottlob, G.,
    Sadri, F., Paschke, A., Roman, D., eds.: Rule Technologies: Foundations, Tools,
    and Applications: 9th International Symposium, RuleML 2015, Berlin, Germany,
    August 2-5, 2015, Proceedings, Cham, Springer International Publishing (2015)
    144–160
23. International Organization for Standardization: Data elements and interchange
    formats – Information interchange – Representation of dates and times, ISO
    8601:2000. (December 2000)
10        James Anderson et al.



Table 1. SPARQL alternatives for revisions and streams. In the data models T is a
temporal literal, R a revision identifier and G a graph name. Temporality indicates
whether temporal properties are specified through variables bound and constrained in
a BGP and/or through a dataset clause. Each entry lists the proposal, its data model,
temporality, process model, SPARQL 1.1 compatibility, support for application graphs,
and the grammar of its syntactic extension.

temporal RDF [9]
  Data model: ((S × P × O) × T)      Temporality: BGP
  Process model: algebra reduction
  SPARQL 1.1 compatible: no          Application graphs: yes
  TriplePattern ::= (Subject Predicate Object) : [T]

R&W Base [10]
  Data model: ((S × P × O) × R)      Temporality: BGP
  Process model: rewrite, generate dataset, delegate
  SPARQL 1.1 compatible: yes         Application graphs: "virtual", graph ≡ R
  Dataset ::= FROM VersionIri

R43ples [11]
  Data model: ((S × P × O) × R)      Temporality: Dataset
  Process model: rewrite, generate dataset, algebra reduction
  SPARQL 1.1 compatible: yes         Application graphs: no, graph ≡ R
  Dataset ::= FROM Graph REVISION VersionIdentifier

C-SPARQL [12]
  Data model: ((S × P × O) × T)      Temporality: BGP
  Process model: algebra reduction
  SPARQL 1.1 compatible: no          Application graphs: no, graph ≡ T
  Dataset ::= FROM Iri Window?
   Window ::= ( RANGE Number TimeUnit ( ( STEP Number TimeUnit ) | TUMBLING ) )
              | ( TRIPLES Number )

CQELS [13]
  Data model: ((S × P × O) × T)* + ((S × P × O)* × T)      Temporality: Dataset
  Process model: rewrite (SQL), delegate
  SPARQL 1.1 compatible: no          Application graphs: yes
  GraphPatternNotTriples ::= ... | StreamGraphPattern
      StreamGraphPattern ::= RANGE [ Window ] VarOrIRIReference TriplesTemplate
                  Window ::= ( RANGE Number TimeUnit ( SLIDE Number TimeUnit )? )
                             | ( TRIPLES Number ) | NOW | ALL

SPARQL Stream [14]
  Data model: ((S × P × O) × T)      Temporality: Dataset
  Process model: rewrite (DSMS), delegate
  SPARQL 1.1 compatible: no          Application graphs: yes
  Dataset ::= FROM STREAM Iri Window?
   Window ::= FROM NOW ( - Number TimeUnit )?
              TO NOW ( - Number TimeUnit )?
              ( STEP Number TimeUnit )?

EP-SPARQL [15]
  Data model: ((S × P × O) × T × T)  Temporality: BGP
  Process model: rewrite (PROLOG), delegate
  SPARQL 1.1 compatible: yes         Application graphs: yes
  GraphPatternNotTriples ::= ... | SeqGraphPattern | EqualsGraphPattern
                             | OptionalSeqGraphPattern | OptionalEqualsGraphPattern
            NullOperator ::= ... | getDURATION | getENDTIME | getSTARTTIME

INSTANS [16]
  Data model: (S × P × O × G)        Temporality: BGP
  Process model: RETE
  SPARQL 1.1 compatible: no          Application graphs: yes
  Verb ::= ... | :hasWindow

DIACHRON [17]
  Data model: ((S × P × O) × R)      Temporality: federated metadata
  Process model: rewrite, delegate
  SPARQL 1.1 compatible: no          Application graphs: no
  SourceClause ::= ... | ( FROM DATASET URI ( AT VERSION URI )? )
                   | ( FROM CHANGES URI ( BEFORE VERSION URI )? )
                   | ( AFTER VERSION URI ) | ( BETWEEN VERSIONS URI URI )
  SourcePattern ::= ... | ( DATASET URI ( AT VERSION URI )? )
                   | ( CHANGES URI ( BEFORE VERSION URI )? )
                   | ( AFTER VERSION URI ) | ( BETWEEN VERSIONS URI URI )

Dydra [18]
  Data model: (S × P × O × G)        Temporality: BGP, Dataset
  Process model: algebra reduction
  SPARQL 1.1 compatible: yes         Application graphs: yes
  RevisionGraphPattern ::= 'REVISION' ( Var | Revision | String ) GroupGraphPattern
   WindowsGraphPattern ::= 'WINDOWS' WindowRef GroupGraphPattern
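The WINDOWS query-string form used by the Dydra proposal, and in the service IRIs
constructed with CONCAT in Figures 3 and 4, can be assembled mechanically. The
following Python sketch assumes the window specification keeps the ISO 8601
repeating-interval shape R/HEAD/width/slide seen in the examples; the helper name
is hypothetical and not part of any specification.

```python
def windowed_stream_iri(stream_iri: str, width: str, slide: str) -> str:
    """Append a WINDOWS query-string field to a stream IRI.

    The value R/HEAD/<width>/<slide> requests repeated windows taken from
    the head of the stream, e.g. R/HEAD/PT05M/PT01M for a five-minute
    window advancing every minute. (Illustrative sketch, not a
    specification-defined builder.)
    """
    # Use '&' when the IRI already carries a query string, '?' otherwise.
    separator = '&' if '?' in stream_iri else '?'
    return f"{stream_iri}{separator}WINDOWS=R/HEAD/{width}/{slide}"

# e.g. the stream identified in the CONSTRUCT template of Figure 3:
iri = windowed_stream_iri('urn:uuid:b4f166f6-387c-11e6-83de-180373ae0412',
                          'PT05M', 'PT01M')
print(iri)
# → urn:uuid:b4f166f6-387c-11e6-83de-180373ae0412?WINDOWS=R/HEAD/PT05M/PT01M
```

The extended IRI is what the query passes to SERVICE, so the data concentrator can
recover the window parameters without any change to the SPARQL grammar itself.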




PREFIX fhir: <http://hl7.org/fhir/>
PREFIX obs:  <http://hl7.org/fhir/Observation.>
PREFIX sct:  <http://snomed.info/id/>
PREFIX prov: <http://www.w3.org/ns/prov#>
PREFIX lr:   <http://localhost/local-records#>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

CONSTRUCT {
  GRAPH ?elementID {
    ?obsID a <http://hl7.org/fhir/Observation> ;
           obs:subject ?patient ;
           obs:effectiveDateTime ?heartRate_time ;
           obs:code <http://localhost/local-records#AbnormallyHighHeartRate> ;
           obs:valueCodeableConcept [ fhir:CodeableConcept.text 'yes' ] .
  }
  ?elementID prov:generatedAtTime ?generatedTime ;
    # this would likely be provided with the request as a protocol parameter
    lr:stream <urn:uuid:b4f166f6-387c-11e6-83de-180373ae0412> .
}
# determine heart rate exceeding some limit
# based on history, activity and institutional factors
WHERE {
  # this would likely be provided with the request as a protocol parameter
  BIND ( <patient/@pat2> AS ?patient )
  ?patient lr:isConnectedTo ?heartRateSensor .
  ?heartRateSensor lr:hasStreamID ?heartRateStreamID ;
                   a lr:HeartRateSensor .
  ?patient lr:isConnectedTo ?activitySensor .
  ?activitySensor lr:hasStreamID ?activityStreamID ;
                  a lr:ActivitySensor .

  BIND ( IRI ( CONCAT ( STR(?activityStreamID), '?WINDOWS=R/HEAD/PT05M/PT01M' ) )
         AS ?service_activity )
  SERVICE ?service_activity {
    SELECT ( MAX(?energyExp) AS ?maxEnergyExp )
    WHERE {
      ?g prov:generatedAtTime ?generated_time .
      GRAPH ?g {
        [] a <http://hl7.org/fhir/Observation> ;
           obs:code <http://snomed.info/id/251833007> ;  # energy expenditure
           obs:valueQuantity [ fhir:Quantity.value ?energyExp ;
                               fhir:Quantity.unit 'kJ' ] .
  } } }

  # establish heart rate baseline through valid-time aggregation
  SERVICE <http://localhost/patientRecords> {
    { SELECT ( AVG(?heartRate) AS ?heartRate_base )
             ( STD(?heartRate) AS ?heartRate_std )
      WHERE {
        ?heartRateObservation a fhir:Observation ;
            obs:subject ?patient ;
            obs:effectiveDateTime ?heartRate_time ;
            obs:code <http://snomed.info/sct:36407505> ;  # heart rate
            obs:valueQuantity [ fhir:Quantity.value ?heartRate ;
                                fhir:Quantity.unit 'bpm' ] .
        FILTER ( ?heartRate_time >= ( NOW() - 'P30D'^^xsd:dayTimeDuration ) )
    } }
    ?patient fhir:Patient.birthDate ?birthdate .
    BIND ( ( YEAR(NOW()) - YEAR(?birthdate) ) AS ?age )
  }
Fig. 3. Combined base-data, temporal and streaming query. See Figure 4 for the
continuation.




  VALUES ( ?ageLevel ?ageMin ?ageMax ?ageFactor ) {
    ( lr:elderly 70 120 0.7 )
    ( lr:midaged 30  70 1.0 )
    ( lr:young    0  30 1.3 )
  }
  FILTER ( ?age >= ?ageMin && ?age < ?ageMax )
  VALUES ( ?activityLevel ?energyExpMin ?energyExpMax ?activityFactor ) {
    ( lr:high   25 100 2.0 )
    ( lr:medium 11  25 1.0 )
    ( lr:low     0  10 0.7 )
  }
  FILTER ( ?maxEnergyExp >= ?energyExpMin * ?ageFactor &&
           ?maxEnergyExp <  ?energyExpMax * ?ageFactor )
  BIND ( ( (1 + ?heartRate_std) * ?activityFactor )
         AS ?relHeartRate_delta_threshold )
  BIND ( ( ?heartRate_base * (1 + ?relHeartRate_delta_threshold) )
         AS ?heartRate_threshold )
  # determine current pulse averages from sensors
  BIND ( IRI ( CONCAT ( STR(?heartRateStreamID), '?WINDOWS=R/HEAD/PT05M/PT30S' ) )
         AS ?service_heartRate_ST )

  SERVICE ?service_heartRate_ST
  { SELECT ( AVG(?heartRate) AS ?heartRate_avgST )
           # if the transaction timestamp, instead
           # ( THEN() AS ?heartRate_time )
           ( MAX(?heartRateTime) AS ?heartRate_time )
    WHERE {
      ?g prov:generatedAtTime ?gen .
      { GRAPH ?g {
          [] a <http://hl7.org/fhir/Observation> ;
             obs:effectiveDateTime ?heartRateTime ;
             obs:code <http://snomed.info/sct:36407505> ;  # heart rate
             obs:valueQuantity [ fhir:Quantity.value ?heartRate ;
                                 fhir:Quantity.unit 'bpm' ] .
  } } } }

  # test limit
  FILTER ( ?heartRate_avgST > ?heartRate_threshold )
  # if it gets this far, it will generate a result observation
  BIND ( UUID() AS ?obsID )
  BIND ( UUID() AS ?elementID )
  BIND ( NOW() AS ?generatedTime )
}
           Fig. 4. Combined base-data, temporal and streaming query, continued
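The queries above presume a data concentrator that buffers each stream window and
re-applies the registered CONSTRUCT query once the window's content has been fully
received. As a rough illustration of that control loop (a minimal sketch, not the
authors' implementation), the following Python fragment handles only the simpler
tumbling case; a sliding window such as R/HEAD/PT05M/PT01M would additionally
retain statements across successive evaluations. The execute_construct callable
stands in for a real SPARQL engine and is an assumption.

```python
import time
from typing import Callable, Iterable

def concentrate(stream: Iterable, window_seconds: float,
                execute_construct: Callable, emit: Callable,
                clock: Callable[[], float] = time.monotonic) -> None:
    """Tumbling-window sketch of the proposed data concentrator.

    Statements arriving on the RDF stream are buffered until the current
    window closes; the buffered statements form the dynamic dataset, the
    CONSTRUCT query (execute_construct, hypothetical) is applied to it, and
    its result becomes the next element of the output stream (emit).
    """
    window, closes_at = [], clock() + window_seconds
    for statement in stream:
        window.append(statement)
        if clock() >= closes_at:          # window content fully received
            emit(execute_construct(window))
            window, closes_at = [], clock() + window_seconds
```

The injectable clock keeps the sketch testable; in a deployment the loop would be
driven by the stream transport, and the slide interval from the WINDOWS field
would determine how often the dynamic dataset is re-evaluated.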