<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Backward/Forward with Marking for Update Streams</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Moritz Illich</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Birte Glimm</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ulm University</institution>
          ,
          <addr-line>James-Franck-Ring, 89081 Ulm</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>Extending a set of facts with every fact that can be derived based on a set of rules is called materialization. Incremental approaches, like Delete/Rederive (DRed) and Backward/Forward (B/F), allow for efficient adaptations of materialized datasets whenever the original set of facts changes due to an update. To effectively deal with streams of updates, we previously extended DRed with marking, where we directly take a look at the next available update in the stream and utilize this insight to prevent repeated rule applications. In this work, we apply this idea to B/F by using marks to indicate and find facts that are deleted by the next update, which enables us to determine facts that need to be checked for some alternative derivation without having to apply rules for that. An evaluation with both synthetic and real test data demonstrates the marking approach's potential to reduce processing time compared to classical B/F.</p>
      </abstract>
      <kwd-group>
<kwd>materialization</kwd>
        <kwd>incremental reasoning</kwd>
        <kwd>stream processing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Basics and Preliminaries</title>
      <p>
        As done for DRed with Marking [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], we formally define Datalog [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] based on countable, disjoint sets of
predicates, constants, and variables. A term is a constant or a variable. An atom has the form P(t₁, . . . , tₙ),
where P is an n-ary predicate and each tᵢ, 1 ≤ i ≤ n, is a term. If an atom does not contain variables,
it is ground. A fact is a ground atom and a dataset is a finite set of facts. A Datalog rule r is a logical
implication of the form B₁, . . . , Bₙ → H, where B₁, . . . , Bₙ are called body atoms, and H is the head
atom. We use body(r) and head(r) to denote the set of body atoms and the head atom of r, respectively.
A rule is safe if all variables that appear in the head also appear in a body atom. A Datalog program is a
finite set of safe rules. Predicates that occur in the head of a rule are called intensional (IDB) predicates;
all other predicates are extensional (EDB).
      </p>
      <p>A substitution σ is a partial mapping from variables to constants. For α a term, an atom, a rule, or a
set of rules, ασ is the result of replacing each occurrence of a variable x in α on which σ is defined with
σ(x). If r is a rule and σ is a substitution mapping all variables of r to constants, then rσ is an instance of
r. We say that a set of facts J instantiates a rule r if there exists a substitution σ such that body(rσ) = J.
Given a Datalog program Π and a dataset I, we define inst(Π, I) = { rσ | r ∈ Π, J ⊆ I, body(rσ) = J }
as the set of all rule instances that can be created by instantiating rules in Π with subsets of I. A
fact is called derivable if it appears as head in a rule instance of inst(Π, I). For a program Π, we define
Π(I) = { head(rσ) | rσ ∈ inst(Π, I) }.</p>
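      <p>To make these definitions concrete, the following Python sketch (our illustration; the representation and all function names are our own choices, not part of the paper) encodes atoms as (predicate, arguments) pairs and rules as (body, head) pairs, and computes the rule instances inst(Π, I) and the consequences Π(I) of a dataset:</p>
      <preformat>
def is_var(term):
    # Variables are strings starting with '?'; everything else is a constant.
    return isinstance(term, str) and term.startswith('?')

def match(atom, fact, subst):
    # Try to extend the substitution `subst` so that atom matches fact; else None.
    (pred, args), (fpred, fargs) = atom, fact
    if pred != fpred or len(args) != len(fargs):
        return None
    subst = dict(subst)
    for a, c in zip(args, fargs):
        if is_var(a):
            if subst.setdefault(a, c) != c:
                return None
        elif a != c:
            return None
    return subst

def apply_subst(atom, subst):
    pred, args = atom
    return (pred, tuple(subst.get(a, a) for a in args))

def instances(program, dataset):
    # inst(Π, I): every rule instance rσ with body(rσ) ⊆ I.
    for body, head in program:
        substs = [{}]
        for atom in body:
            substs = [s2 for s in substs for fact in dataset
                      for s2 in (match(atom, fact, s),) if s2 is not None]
        for s in substs:
            yield (tuple(apply_subst(a, s) for a in body), apply_subst(head, s))

def consequences(program, dataset):
    # Π(I): the heads of all rule instances in inst(Π, I).
    return {head for _body, head in instances(program, dataset)}
      </preformat>
      <p>For instance, a rule edge(x, y) → path(x, y) becomes ((('edge', ('?x', '?y')),), ('path', ('?x', '?y'))) in this representation.</p>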
      <sec id="sec-2-1">
        <title>2.1. Materialization Maintenance</title>
        <p>
          Let E be a finite dataset of explicit facts. Then, let I₀ = E; for each i ≥ 1, let Iᵢ = Iᵢ₋₁ ∪ Π(Iᵢ₋₁), and let
Iₖ = Iₖ₊₁ for some k ≥ 1. The set Iₖ is the materialization of E w.r.t. Π, denoted as mat(Π, E). Let E⁻
and E⁺ be finite datasets with E⁻ ⊆ E, E ∩ E⁺ = ∅, and E⁺ ∪ E⁻ ≠ ∅. The tuple u = (E⁻, E⁺) is
called an update for E, where E⁻ denotes the facts explicitly deleted from E, and E⁺ the facts explicitly
added to E, respectively. Applying the update u on E leads to the updated dataset E′ = (E ∖ E⁻) ∪ E⁺.
We allow only EDB predicates in updates. Accordingly, a fact is implicit if it has an IDB predicate, and
explicit if it has an EDB predicate. This is w.l.o.g. as we can replace a k-ary IDB predicate P that is to be
used in an update by a new k-ary predicate P′ and add rules of the form P(x₁, . . . , xₖ) → P′(x₁, . . . , xₖ)
such that P becomes an EDB predicate (see, e.g., [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]).
        </p>
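        <p>To illustrate this renaming in the representation of the sketch above (a hypothetical example of ours; the names path and path_p are made up, see the code below):</p>
        <preformat>
# Hypothetical renaming example (ours): `path` is IDB in ORIGINAL, so it may
# not be updated directly. In RENAMED, `path` is replaced by `path_p` in all
# rule heads, and the bridging rule path(x, y) -> path_p(x, y) is added;
# `path` then occurs only in rule bodies and may be updated explicitly (EDB).
ORIGINAL = [
    ((('edge', ('?x', '?y')),), ('path', ('?x', '?y'))),
]
RENAMED = [
    ((('edge', ('?x', '?y')),), ('path_p', ('?x', '?y'))),
    ((('path', ('?x', '?y')),), ('path_p', ('?x', '?y'))),
]
        </preformat>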
        <p>Materialization maintenance is the task of computing the updated materialization mat(Π, (E₀ ∖ E⁻) ∪
E⁺) for a given materialization mat(Π, E₀) and an update u = (E⁻, E⁺). Let Û = ⟨u₁, u₂, . . .⟩, with
|Û| ≥ 1, denote a stream of updates for a dataset E₀, where for each uᵢ = (Eᵢ⁻, Eᵢ⁺) ∈ Û, we have
Eᵢ⁻ ⊆ Eᵢ₋₁ and Eᵢ₋₁ ∩ Eᵢ⁺ = ∅, resulting in Eᵢ = (Eᵢ₋₁ ∖ Eᵢ⁻) ∪ Eᵢ⁺. For a stream of updates Û,
materialization maintenance leads to a stream of materialized datasets ⟨mat(Π, E₁), mat(Π, E₂), . . .⟩,
where each Eᵢ corresponds to an uᵢ ∈ Û.</p>
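        <p>Continuing the sketch from Section 2 (again our own illustration with assumed helper names), the materialization mat(Π, E) is the least fixpoint of adding Π(I), an update u = (E⁻, E⁺) is applied by set operations on the explicit facts, and a naive, non-incremental maintenance baseline simply rematerializes after every update of the stream; incremental algorithms like DRed and B/F exist precisely to avoid this recomputation from scratch:</p>
        <preformat>
def materialize(program, explicit):
    # mat(Π, E): extend E with Π(I) until a fixpoint is reached.
    dataset = set(explicit)
    while True:
        new = consequences(program, dataset) - dataset
        if not new:
            return dataset
        dataset |= new

def apply_update(explicit, update):
    # Applying u = (E⁻, E⁺) to E yields E′ = (E ∖ E⁻) ∪ E⁺.
    e_minus, e_plus = update
    return (set(explicit) - set(e_minus)) | set(e_plus)

def maintain_naively(program, explicit, updates):
    # Baseline materialization maintenance: recompute mat(Π, Eᵢ) from
    # scratch for every update uᵢ in the stream.
    for update in updates:
        explicit = apply_update(explicit, update)
        yield materialize(program, explicit)
        </preformat>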
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Related Work</title>
      <p>
        As we combine the main idea of Delete/Rederive with Marking [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] with Backward/Forward [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], these
algorithms serve as main references for our work and are described in more detail below. In addition,
other improvements to DRed and B/F as well as to the well-known counting approach [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] are presented
by Motik et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], while Hu et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] describe combined algorithms. Related to the processing of
streams, Terdjimi et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] present a tag-based approach that allows for fast re-insertions of deleted
facts without repeating rule applications, while Ren and Pan [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] describe a truth maintenance system
to handle update streams for ℰℒ++ ontologies. DynamiTE [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] performs materialization in parallel
for RDF streams and IMaRS [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ] utilizes window-based expiration times for efficient incremental
adaptations. Unlike Delete/Rederive with Marking, however, all of these approaches only consider one
update at a time.
      </p>
      <sec id="sec-3-1">
        <title>3.1. Delete/Rederive with Marking</title>
        <p>
          DRed with Marking [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] extends the classical Delete/Rederive algorithm [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] in order to more efficiently
deal with streams of updates, where changes to the materialized dataset might appear faster than they
can be processed. The general procedure is still as in the classical algorithm, consisting of the three
sequential overdeletion, rederivation, and insertion phases: Given a materialized dataset I = mat(Π, E)
and an update u = (E⁻, E⁺), the overdeletion phase removes every fact from I that occurs in E⁻ or
that can be derived by means of such a deleted fact. Even though this procedure ensures that we delete
every fact that is not derivable anymore in the updated materialization, it might also falsely remove
facts which still have some alternative derivation that is not affected by any deletion. This problem is
solved in the subsequent rederivation phase, where we re-add all facts that can still be derived based on
the remaining facts. After that, we insert E⁺ and compute every new derivable fact in the insertion
phase to obtain a correctly updated materialization.
        </p>
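        <p>For reference, here is a compact sketch of the classical DRed procedure just described (the standard algorithm in a simplified rendering of ours, reusing the instances and materialize helpers from Section 2):</p>
        <preformat>
def dred(program, dataset, explicit, e_minus, e_plus):
    # 1) Overdeletion: delete E⁻ and everything derivable via a deleted fact.
    deleted = set(e_minus)
    while True:
        over = {head for body, head in instances(program, dataset)
                if head in dataset and any(b in deleted for b in body)}
        if over.issubset(deleted):
            break
        deleted |= over
    dataset = dataset - deleted
    explicit = set(explicit) - set(e_minus)
    # 2) Rederivation: re-add overdeleted facts that are still derivable.
    while True:
        red = {head for body, head in instances(program, dataset)
               if head in deleted and head not in dataset}
        if not red:
            break
        dataset |= red
    # 3) Insertion: add E⁺ and compute every new derivable fact.
    dataset |= set(e_plus)
    return materialize(program, dataset), explicit | set(e_plus)
        </preformat>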
        <p>Classically, DRed only focuses on one update during its processing. The expanded algorithm, on the
other hand, also tries to integrate the next update of the stream into the current processing as soon as it
is available. This is achieved by marking facts in the dataset that are changed by the next update. In
particular, a fact is marked negatively if it is deleted by the next update, whereas a currently deleted
fact is marked positively if it is re-added. When such a marked fact is involved in some rule application,
then we also mark the derived implicit fact if certain conditions hold. This way, we are able to directly
determine some of the fact changes related to the next update and, thus, reduce the number of rule
applications that would usually be necessary to process the next update. Concretely, we avoid cases
where a rule instance has to be considered again for the next update, as illustrated in the following
example:
Example 1. Assume we have a rule A₁(x), A₂(x) → B(x) and two consecutive updates u = (∅, {A₂(c)})
and u′ = ({A₁(c)}, ∅), which we sequentially apply on a dataset E = {A₁(c)}. When we process u, we
directly take a look at the next update u′ and (negatively) mark the fact A₁(c) in E, since it will be deleted
by u′. After adding A₂(c) to E, we can apply the rule to derive B(c), which will be marked too, because its
derivation depends on A₁(c), which is deleted by the next update, as indicated by the mark. Once we finish
the processing of u and move on to u′, we can directly see that B(c) is affected by some deletion due to its
mark, without the need to apply the rule again.</p>
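        <p>A minimal sketch of the mark propagation from Example 1 (our simplification; the actual conditions under which DRed with Marking passes a mark on are more restrictive, see [6]) marks a materialized head whenever some fact in the body of one of its rule instances is marked:</p>
        <preformat>
def propagate_marks(program, dataset, marked):
    # Simplified: pass a mark from a body fact on to a derived head. The full
    # algorithm in [6] imposes further conditions before a mark may be passed.
    changed = True
    while changed:
        changed = False
        for body, head in instances(program, dataset):
            if head in dataset and head not in marked and any(b in marked for b in body):
                marked.add(head)
                changed = True
    return marked
        </preformat>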
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Backward/Forward</title>
        <p>
          The Backward/Forward algorithm [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] was originally introduced as a way to deal with the inherent
inefficiency of DRed, where falsely deleted facts have to be rederived. While the computation of new
derivable facts due to insertions is done as in DRed, the B/F algorithm directly checks if a deleted fact has
an alternative derivation that does not include any deleted facts, to prevent a potential (over)deletion
along with the needed rederivation of its consequences.
        </p>
        <p>To find an alternative derivation for some fact F, first, backward chaining is applied to determine
facts that would be needed to derive F. In detail, we look for rule instances that have F as head and
then recursively check if we can “prove” the facts in the rule instance’s body, where a fact is considered
to be “proven” if it either is explicit (and not deleted) or can be derived entirely by proven facts. To
ensure termination in the presence of recursive derivation cycles, each fact is only checked once during
backward chaining. For the proving of implicit facts, we perform forward chaining by applying rules
to already proven facts. If a fact has been checked during backward chaining and was also derived
during forward chaining, then it is proven and we keep it; otherwise it has to be deleted.
Example 2. Assume we have three rules A₁(x), A₂(x) → B(x), A₃(x) → B(x), and B(x) → Q(x), and an
update u = ({A₁(c)}, ∅), which we apply on a materialized dataset I = {A₁(c), A₂(c), A₃(c), B(c), Q(c)}.
Since B(c) can be derived by the deleted A₁(c) based on the first rule, we check if it has an alternative
derivation. Using backward chaining, we match B(c) to the head of the second rule, which then tells us to
check A₃(c) in the rule’s body next. Since A₃(c) is explicit (as it cannot be derived by any rule), it is directly
“proven” and, hence, can be used for forward chaining with the second rule to derive and, thus, “prove”
the previously checked B(c) too. Accordingly, B(c) (and its consequence Q(c)) must not be deleted.</p>
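        <p>The following self-contained Python sketch illustrates this check (our illustration, not the paper’s algorithm; instances is the helper from Section 2, and is_explicit is an assumed classifier that tests whether a fact has an EDB predicate). It proves a fact exactly if the fact has a derivation avoiding the deleted facts; unlike the actual algorithm, which checks each fact at most once and handles cycles via the delayed facts, this naive variant cuts cycles with a recursion stack and may therefore examine a fact repeatedly:</p>
        <preformat>
def bf_check(program, dataset, deleted_explicit, fact):
    # Does `fact` have a derivation that avoids every deleted fact?
    # Explicit facts prove themselves unless deleted; an implicit fact needs
    # a rule instance whose body facts are all provable.
    proven = set()

    def prove(f, stack):
        if f in proven:
            return True
        if f in stack:
            return False  # no well-founded derivation through a cycle
        if is_explicit(f):  # assumed classifier: EDB predicate, i.e., explicit
            ok = f in dataset and f not in deleted_explicit
        else:
            ok = any(head == f and all(prove(b, stack | {f}) for b in body)
                     for body, head in instances(program, dataset))
        if ok:
            proven.add(f)
        return ok

    return prove(fact, frozenset())
        </preformat>
        <p>In Example 2, bf_check would prove B(c) via the rule instance A₃(c) → B(c) and, hence, keep B(c) and Q(c).</p>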
        <p>The reason why this combination of backward and forward chaining often performs better than
DRed is that instead of traversing (and deleting) the facts which can be derived based on a checked
fact ({Q(c)} in Example 2), B/F goes through the facts that potentially support the derivation of the
checked fact ({A₃(c)} in Example 2), where the latter set of facts is usually smaller. Nevertheless, there
still exist cases where B/F can be slower than DRed, for example, when alternative derivations and,
thus, rederivations are rare.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. B/F with Marking</title>
      <p>
        The general idea of the marking approach is that we first mark explicit facts in the dataset which are
changed by the next update, and then pass these marks on to implicit facts during rule applications
in order to determine further changes that are relevant for the next update. The evaluation of DRed
with Marking [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] showed that positive marks, i.e., cases where a currently deleted fact is re-added by
the next update, are quite rare and, hence, do not notably contribute to the performance gain of the
marking approach. Therefore, we only focus on negative marks for B/F, i.e., situations where facts are
deleted by the next update.
      </p>
      <p>An implicit fact is affected by some deletion if there exists a rule instance that has the implicit fact as
head and contains a deleted fact in its body. Accordingly, if we have a rule instance that contains at
least one marked fact in its body, we may simply mark the head too, as already illustrated in Example 1.
In DRed, the overdeletion phase removes every fact that is affected by some deletion, which is why we
can also use implicit facts there to propagate marks to other derived facts. In contrast to this, deletions
in B/F are accurate, in the sense that we only remove a fact from the dataset when we are sure that it
cannot be derived anymore. Accordingly, a marked implicit fact should only be used to mark another
fact if we can guarantee that the former cannot be derived anymore due to the next update. As this
knowledge can generally not be obtained before actually processing the next update, we may only use
marked explicit facts, for which we definitely know that they will be deleted, to mark other facts in B/F.</p>
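      <p>In code, the resulting marking rule for B/F is a one-line condition over a rule instance (our sketch, reusing the tuple representation from Section 2): the head is marked only if some body fact is both explicit and marked.</p>
      <preformat>
def mark_head(rule_instance, marked_explicit, marks):
    # B/F with marking: only facts that are explicit AND marked (i.e., facts
    # that the next update certainly deletes) may pass their mark on; marked
    # implicit facts never propagate, in contrast to DRed's overdeletion.
    body, head = rule_instance
    if any(b in marked_explicit for b in body):
        marks.add(head)
      </preformat>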
      <p>The computation of marked implicit facts can be done for B/F both during the forward chaining
process, where we try to prove that a fact is still derivable, and when we determine new derivable facts
based on insertions. By looking for marked implicit facts once the processing of the current update is
completed, we can directly obtain some facts that have to be checked for alternative derivations when
the next update is processed, without the need to first search for appropriate rule instances that tell us
that those facts are affected by some deletion.</p>
      <p>With the inclusion of the next update in our processing, one might think that we are also able to
re-use some of the “proven” facts, and thus avoid repeated backward/forward chaining, by excluding
marked proven facts for which we know that they are not derivable in the next update. However, this
generally does not work, since markings usually do not cover every fact that has to be deleted for the
next update: on the one hand, because we do not allow implicit facts to pass on marks, and on the other
hand, because updates might be introduced with a delay, so that some proven facts might not be marked
even though they are affected by a deletion.</p>
      <sec id="sec-4-1">
        <title>4.1. Algorithm</title>
        <p>The formal description of Backward/Forward extended with marking is presented in Algorithm 1.
Here, we use three functions: getNext(Û), which returns the next update in the stream Û if one is
available, getLast(W), which returns the most recently added fact from the set W, and getMarked(I),
which returns the set of marked facts from I. Additionally, we assume two procedures mark(S) and
unmark(S), which add and remove marks for the facts in the given set S, respectively. The input of
Algorithm 1 consists of a Datalog program Π, a (possibly empty) materialized dataset I = mat(Π, E₀),
and a stream of updates Û = ⟨u₁ = (E₁⁻, E₁⁺), u₂ = (E₂⁻, E₂⁺), . . .⟩. The algorithm’s output is a
stream of materialized datasets ⟨mat(Π, E₁), mat(Π, E₂), . . .⟩ with Eᵢ = (Eᵢ₋₁ ∖ Eᵢ⁻) ∪ Eᵢ⁺ for i ≥ 1,
where each dataset refers to an update in Û. Note that we only work with one program Π, i.e., we do
not deal with changes to the set of rules. The algorithm’s correctness is shown in Appendix A.</p>
        <p>One difference to the original B/F algorithm is that we conduct our computations in a big loop to
allow the continuous processing of a whole stream of updates. At the beginning of each iteration, we
check if there is a current update u₁ to be processed, as well as a next update u₂ that occurs directly
after u₁ in the stream. As in DRed with Marking, we consider at most two updates at the same time
to keep the marking computations simple. In general, immediate access to updates is not always
possible due to delays in the stream. If we do not have a current update u₁, we wait until one is available
(cf. line 5).</p>
        <p>Algorithm 1: B/F with Marking. Its deletion phase consists of blocks for a) forward chaining with
proven facts, b) backward chaining with waiting facts, c) determining facts affected by deletion, d) deleting
facts that cannot be proven, and e) finishing the deletions and preparing the insertions; its insertion phase
consists of blocks for f) adding new derivable facts and g) preparing the next update’s processing.</p>
        <p>For the next update u₂, however, we only look once per iteration, before we continue with the
processing of u₁ to avoid any stagnation (cf. line 8). When a next update has been found, we directly
use it to mark explicit facts that are in the dataset or added by the current update but also deleted by
the next update (see line 9). To ensure a fast integration of the next update once it is available, we apply
at most one rule in each iteration, as indicated by the existential quantifiers in lines 11, 20, 23, and 28.
Note that despite the usage of the set inst(Π, I) in those lines, we do not have to determine every possible
rule instance at once, but instead can compute them gradually as needed.</p>
        <p>The remaining operations in the algorithm are separated into a deletion and an insertion phase based
on a phase variable (cf. lines 10 and 27), which is updated once the computations for a phase are finished
(cf. lines 26 and 33). Similarly to the original B/F algorithm, we use various sets in the deletion phase to
indicate that a fact is “completely checked” (C), “deleted” (D), “proven” (P), “waiting to be completely
checked” (W), or “delayed” (L), i.e., proven but not checked yet. The deletion phase can be further
divided into five blocks (a–e) and the insertion phase into two (f–g), as indicated by the comments on
the right-hand side of Algorithm 1. Note that the order of the blocks in the algorithm is based on their
priority and does not necessarily represent their appearance during processing. We use the following
example as a starting point to describe the functionality of the different sets and blocks:
Example 3. Assume we have four rules A₁(x), A₂(x) → B(x), A₃(x) → B(x), B(x) → Q(x), and
B(x), A₄(x) → R(x), and two consecutive updates u = ({A₁(c)}, {A₄(c)}) and u′ = ({A₄(c)}, ∅), which
we sequentially apply on a materialized dataset I = {A₁(c), A₂(c), A₃(c), B(c), Q(c)}. During the first loop
iteration in Algorithm 1, we assign u to u₁ and set D = {A₁(c)}, before we assign u′ to u₂ and then
mark the fact A₄(c).</p>
        <p>Since the sets P and W are initially empty, but D is not, we begin with block c), where we determine
facts that are affected by a deletion. For that, we search for rule instances where the body contains some
deleted fact from D and the head has not been checked yet (see line 23), in which case the head is added
to the set of waiting facts W (see line 24). In our example, we use the rule instance A₁(c), A₂(c) → B(c)
with A₁(c) ∈ D to add B(c) to W.</p>
        <p>Due to B(c) ∈ W, we apply block b) in the next iteration, where we process facts that are waiting to
be checked for a deletion-free derivation. For the implicit fact B(c), we do this by applying backward
chaining based on the rule instance A₃(c) → B(c), where the body does not contain any deleted fact
from D but still consists of some facts that were not checked yet, so that they may be added to W for
future processing (see lines 20 and 21). In particular, we only add facts to W if they do not already occur
in C ∪ W to guarantee termination for recursive derivation cycles. As in the original B/F algorithm, we
perform backward chaining in a depth-first manner by always selecting the most recently added fact
from W (see line 17). Accordingly, we continue in the next iteration with the new fact A₃(c). Because
A₃(c) is explicit and not deleted, it is regarded as “proven” and added to the set P. Thus, we do not
need to process it any further and move it from W to the set of completed facts C (cf. lines 18 and 19).</p>
        <p>With A₃(c) ∈ P, the next iteration starts with block a), which is responsible for the forward chaining
segment of B/F, where we derive facts entirely from proven facts in P (cf. line 11). In addition, the
derived fact is marked if the body of the applied rule contains a marked explicit fact (see line 12). Using
the rule A₃(c) → B(c), we derive and, thus, prove the fact B(c), which is also added to C as it has
already been checked during backward chaining and, hence, occurs in W (or C), based on lines 13 to 15.
Prioritizing forward over backward chaining facilitates a quick proving of facts that were considered
during backward chaining, so that we can prevent their further processing in block b) by removing
them from W (see line 14). With B(c) in P, we then also derive Q(c) based on the rule B(c) → Q(c) in
the next iteration, but since Q(c) does not occur in W or C, it is added to the set of delayed facts L (see
line 16), so that it may directly be proven should it later appear during backward chaining (cf. lines 18
and 19).</p>
        <p>At this point in our example, we have computed every derivable fact from P, the set W is empty, we
have considered every rule instance that contains a fact from D, and every checked fact has been proven,
i.e., C = P. Therefore, we can only apply block e), where we remove the facts in D from the
materialized dataset I and add the new explicit facts of the current update u₁, resulting in I =
{A₂(c), A₃(c), B(c), Q(c), A₄(c)}, before we adapt the phase variable to continue with the insertion phase
(see line 26).</p>
        <p>In the first block f) of the insertion phase, we repeatedly add every fact to the dataset that is not present
yet, but occurs as head in a rule instance (see lines 28 and 30). In our case, we use B(c), A₄(c) → R(c) to
add R(c) to the dataset. Furthermore, we also mark the derived fact R(c) due to the marked explicit fact
A₄(c) in the rule body (cf. line 29). Once we cannot add any new fact, we end the insertion phase in
block g) by writing the now fully updated materialized dataset to the output stream and prepare the
processing of the next update in lines 32 and 33: we empty the sets C, P, and L to reset the proven
facts, initialize D with the explicitly deleted facts of the next update, and use the set of marked implicit
facts as starting point for W. In our example, we get I = {A₂(c), A₃(c), B(c), Q(c), A₄(c), R(c)} with
D = {A₄(c)} and W = {R(c)}.</p>
        <p>Unlike the processing of the first update, the second one now begins with block b) as W is not empty.
More specifically, we can directly prevent rule applications in block c), where we only consider a rule
instance if the head does not already occur in C or W (see line 23). Accordingly, the initialization of W
based on marked facts is how we improve the performance of B/F in the end. Since R(c) ∈ W is neither
explicit nor in L, and there is no rule instance without any body fact in D which has R(c) as head, the
fact is moved to C based on line 22.</p>
        <p>Because P and W are empty, and R(c) occurs in C but not in P, we apply block d) next. At this
point, we know that facts which were checked but could not be proven do not possess any alternative
derivation and, hence, have to be deleted, which is why we add them to D, so that we may determine
further facts affected by deletions in the following iterations (cf. line 25). Since R(c) cannot be matched
to any body atom in our rules, we cannot perform block c) and instead move on to block e), where the
deleted facts are removed from the dataset, leading to I = {A₂(c), A₃(c), B(c), Q(c)}. As u′ does not
insert new facts, we then finish with block g).</p>
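        <p>To summarize the control flow, the following compact sketch (our reconstruction for illustration, not the authors’ listing; instances and bf_check are the earlier sketches, and, for brevity, marks are only propagated in the insertion phase, whereas Algorithm 1 also propagates them during the forward chaining of the deletion phase) processes a list of updates and seeds the waiting set of each round with the marked implicit facts of the previous one:</p>
        <preformat>
def process_stream(program, dataset, explicit, updates):
    # `updates` is a list of (e_minus, e_plus) pairs; yields one updated
    # materialization per update, as in Algorithm 1.
    sentinel = (set(), set())
    waiting = set()
    for (e_minus, e_plus), (next_minus, _) in zip(updates, updates[1:] + [sentinel]):
        e_minus, e_plus = set(e_minus), set(e_plus)

        # DELETION PHASE (blocks b-d): check every fact affected by a
        # deletion, starting from the facts seeded by the previous round.
        deleted, checked = set(e_minus), set()
        while True:
            affected = waiting | {head for body, head in instances(program, dataset)
                                  if any(b in deleted for b in body)}
            todo = affected - checked - deleted
            if not todo:
                break
            fact = todo.pop()
            checked.add(fact)
            if not bf_check(program, dataset, e_minus, fact):
                deleted.add(fact)  # no deletion-free derivation (block d)

        # Block e: finish the deletions and insert the new explicit facts.
        dataset = (dataset - deleted) | e_plus
        explicit = (explicit - e_minus) | e_plus

        # Line 9: mark the explicit facts that the next update will delete.
        marks = {f for f in next_minus if f in explicit}

        # INSERTION PHASE (block f): add new derivable facts; a head derived
        # from a marked explicit body fact becomes a marked implicit fact.
        marked_implicit = set()
        while True:
            new = set()
            for body, head in instances(program, dataset):
                if head not in dataset:
                    new.add(head)
                    if any(b in marks for b in body):
                        marked_implicit.add(head)
            if not new:
                break
            dataset |= new

        yield dataset
        waiting = marked_implicit  # block g: seed W for the next round
        </preformat>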
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Evaluation</title>
      <p>
        Extending DRed with marking allowed us to reduce the CPU time that is needed to process a stream of
updates by about 25% on average in the conducted evaluation [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. For the B/F algorithm, we expect less
improvement in performance. The first reason is that marks are only computed based on explicit
facts, which might lead to fewer marked implicit facts and, thus, a smaller reduction of rule applications.
In addition, the main effort of B/F usually lies in the backward and forward chaining of the deletion
phase, whereas determining which facts are affected by a deletion is comparatively simple. To see if the
marking approach can still improve B/F despite those limitations, we conducted the same evaluation as
for DRed with Marking.
      </p>
      <sec id="sec-5-1">
        <title>5.1. Test Data</title>
        <p>The evaluation involves two synthetic and one real test case. In the synthetic ones, we work with a
graph represented by a set of directed edges, where randomly generated updates appropriately add
and delete the same number of edges. In one case, we use a Datalog program trans, which computes
(transitive) paths in the graph, and in the other one a program seq, where predicates are repeatedly
renamed to create simple sequences of rule applications. In particular, trans allows for many alternative
derivations, which is often an advantage for B/F compared to DRed, while facts in seq can only be
derived in one specific way, which is usually less optimal for B/F.</p>
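        <p>For illustration, the trans program can be written directly in the representation of the Section 2 sketch (our encoding; seq and the real data program can be encoded analogously):</p>
        <preformat>
# trans (cf. Appendix B): edge(x, y) -> path(x, y) and
# edge(x, z), path(z, y) -> path(x, y).
TRANS = [
    ((('edge', ('?x', '?y')),), ('path', ('?x', '?y'))),
    ((('edge', ('?x', '?z')), ('path', ('?z', '?y'))), ('path', ('?x', '?y'))),
]

# Example: the edges a->b->c yield path(a, b), path(b, c), and path(a, c).
edges = {('edge', ('a', 'b')), ('edge', ('b', 'c'))}
paths = materialize(TRANS, edges)
        </preformat>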
        <p>The real test case is inspired by a task in autonomous driving, where we reason about dynamically
loaded map data. Given a GPS track, we load map data within a radius of 50 m around each GPS position
and then generate a stream of updates, which states how the map data changes between the sequential
GPS positions. The map data focuses on specific types of ways, for which Datalog rules are utilized
to compute connections and their transitive relations. The exact rules of each program are listed in
Appendix B.</p>
        <p>Table 1: Results for trans per update size, comparing classical B/F ("old") and B/F with marking
("new"): the relative reduction of the CPU time, the number of rule applications for deletions, backward
chaining, forward chaining, and insertions, and the number of marked explicit ("ex.") and implicit
("im.") facts.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Test Execution and Results</title>
        <p>
          We performed the evaluation by extending the implementation of DRed with Marking, which is available
online (https://github.com/M-Illich/dred-mark-eval). The tested update streams are created with Java, while Constraint Handling Rules [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] based
on SWI-Prolog (https://www.swi-prolog.org/) are used to implement both the original and our extended B/F algorithm. For a more
efficient implementation, we slightly adapted Algorithm 1 to allow more than one rule application in a
loop iteration once the next update u₂ has been provided.
        </p>
        <p>During the tests, we measured the time spent by the CPU to process the whole stream, along with
the number of applied rules in the different phases of the algorithm. For our extended B/F approach, we
additionally counted the number of marked facts. The following results were obtained on a Windows 11
PC with an AMD Ryzen 7 3700X 3.59 GHz CPU and 16 GB RAM, using SWI-Prolog 9.3.15 with a 4 GB
stack.</p>
        <sec id="sec-5-2-1">
          <title>5.2.1. Synthetic Data Results</title>
          <p>For the synthetic tests, we computed average values from three runs based on different random seeds
and five repetitions. Every stream included exactly 50 updates, where the first one added 100 facts
to an empty dataset, for which the materialization was first completed before any other update was
considered. Each test run processed eight update streams, where the update size, i.e., the number of
both added and deleted facts, increased evenly from 10 to 80. The maximum number of nodes in the
graph was set to 20 for trans and to 100 for seq.</p>
          <p>
            Table 1 shows the results for trans, where “old” refers to classical B/F and “new” to B/F with marking,
respectively. Furthermore, “deletions” presents the number of rule applications for line 23 in Algorithm 1,
“backward” for line 20, “forward” for line 11, and “insertions” for line 28. As indicated by the “reduction”
column, the marking approach can both decrease and increase the processing time. Nevertheless, this
may still be seen as an improvement, because B/F is generally suited for smaller updates [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ], all of
which benefit in our test.
          </p>
          <p>The reason for the performance gain is provided by the "deletions" column, which shows a greatly
reduced number of rule applications. For larger updates, however, this does not suffice, as more facts in
an update also mean more marking-related overhead. In addition, the ratio of marked implicit facts
("im.") to marked explicit facts ("ex.") is lower than for smaller updates, while the number of performed
deletion rules in the original B/F is also relatively small for larger updates, thus further hindering the
gain of the marking approach. Moreover, the number of applied rules during backward and forward
chaining is actually higher with marking, especially for larger updates, which may be explained by
the change of order in which facts are checked for alternative derivations in line 17 of Algorithm 1,
due to the initialization of the set W based on marked implicit facts in line 32. The impact of the facts'
processing order on the number of performed rules is also the reason why the number of marked
implicit facts generally does not match the number of prevented deletion rules. For example, a fact
might already be proven during the processing of another fact, before we determine that it is affected
by a deletion.</p>
          <p>Table 2 reports the results for seq and Table 3 those for the real data tracks, in the same format as
Table 1. For the three tracks, the measured CPU times of classical B/F vs. B/F with marking are 114.58
vs. 106.22 (track 0), 20.57 vs. 20.29 (track 1), and 10.46 vs. 10.24 (track 2), corresponding to time
reductions of 7.3%, 1.4%, and 2.2%, respectively.</p>
          <p>The results for seq are in Table 2. Due to the lack of alternative derivations, we cannot apply any
rules during the backward and forward chaining parts, which is why the prevention of rule applications
indicated in the "deletions" column has a higher influence on the performance improvement than for
trans (see "reduction" column), although trans could decrease the number of deletion rules much
more. In particular, even the larger updates did benefit from the marking approach. For the same reason,
the number of marked implicit facts now equals the number of prevented deletion rules.</p>
        </sec>
        <sec id="sec-5-2-2">
          <title>5.2.2. Real Data Results</title>
          <p>For the real test, we used predefined GPS tracks (track 0, track 1, track 2) to generate three streams
consisting of 55, 83, and 114 updates that add/delete around 18/17, 14/14, and 9/8 facts on average,
and 172/114, 152/167, and 65/74 facts at maximum, respectively. The results in Table 3 lead to similar
conclusions as the synthetic tests, with an overall better performance for the marking approach (see
"reduction" column) due to a decreased number of rule applications related to "deletions". While the
undesired increase of applied backward chaining rules is much smaller than for trans, so is the number
of marked implicit facts, especially in proportion to the explicit ones, which is why we mainly obtain
small time improvements.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>We extended the Backward/Forward algorithm with marking, as done previously for Delete/Rederive.
During the processing of an update, we directly take a look at the next available one and mark facts that
are deleted by this next update. With those marked explicit facts, we can already determine some facts
that have to be checked for alternative derivations during the processing of the next update, without the
need to apply rules for that. An evaluation based on both synthetic and real data tests showed that the
marking approach can reduce the processing time, although the improvements are often small. Future
work may involve further optimizations, like integrating heuristics that specify the processing order of
facts to prevent additional rule applications during backward and forward chaining.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-8">
      <title>A. Algorithm – Correctness</title>
      <p>We show correctness of Algorithm 1 in a similar way to DRed with Marking. First, we prove that the
algorithm works correctly if the stream only consists of a single update, and then show that the marking-
related computations do not have any influence on either the current update’s or the next update’s
dataset. For the following, we define that given two sequential updates u and u′, the “processing of
u” refers to loop iterations in Algorithm 1 where u₁ = u and u₂ = u′, while the “processing of u′”
refers to the subsequent, continuing iterations where u₁ = u′. Furthermore, we say that a “derivation
path” for a fact F is a sequence of rule instances r₁σ₁, . . . , rₙσₙ, where body(r₁σ₁) only contains explicit facts,
head(rₙσₙ) = F, and head(rᵢσᵢ) ∈ body(rᵢ₊₁σᵢ₊₁) for 1 ≤ i &lt; n.</p>
      <p>Let I = mat(Π, E) be the initial dataset and I′ = mat(Π, E′) be the correctly adapted dataset with
E′ = (E ∖ E⁻) ∪ E⁺ for an update u = (E⁻, E⁺). We prove the correct processing of a single update
by showing that the algorithm’s deletion phase removes every fact from I ∖ I′, while the insertion
phase adds every new derivable fact from I′ ∖ I, respectively. For the deletion phase, we first show the
following claims related to Algorithm 1:
Claim 1. A fact F is added to the set W if and only if it is also added to the set C.</p>
      <p>Proof. (⇒) When a fact F occurs in W, it will eventually be selected in line 17. If F satisfies the
conditions in line 18, it will be added to C in the next line. The second condition, in line 20, can only be
satisfied a limited number of times due to the finite number of rule instances and the prevention of
repetitions, as the consequence of this operation contradicts its condition. Hence, F will eventually be
added to C based on line 22. The only way to prevent the selection of F in line 17 is its removal from
W. Whenever we remove a fact from W in lines 14, 19, and 22, however, we also add the fact to C.</p>
      <p>(⇐) We only add a fact F to C in lines 14, 19, and 22, which can only be visited when F is also in
W (see lines 13 and 17).</p>
      <p>Claim 2. A fact F is added to the set P if and only if F ∈ C ∩ I′.</p>
      <p>Proof. (⇒) A fact F is only added to P in lines 14 and 19, where F is also added to C, or in line 15,
which already requires that F ∈ C. The conditions for the former additions are that F is explicit and
not in E⁻, or that F can be derived by proven facts from P. We show that F ∈ P ⇒ F ∈ I′ by
induction:</p>
      <p>As base case, we consider the initial situation where P is empty, so that we can only add explicit
facts that are not in E⁻ and, thus, still in E′ ⊆ I′. As induction step, we consider the case where we
add F by deriving it from facts in P. By our hypothesis, every fact from P is also in I′, so that the facts
derived from P have to be in I′ too.</p>
      <p>(⇐) If a fact F occurs in C ∩ I′, it was also processed as part of W based on Claim 1. If F is explicit,
it cannot be in E⁻ due to F ∈ I′, so that it will be added to P in line 19. We show that an implicit F is
added to P by induction over the length of F’s derivation paths in I ∩ I′, because implicit facts in I′
are based on rule instances from inst(Π, I ∩ I′):</p>
      <p>For the base case, we assume that F has a derivation path of length 1, which means that there exists
a rule instance rσ with head(rσ) = F and body(rσ) ⊆ I ∩ I′. Since F was processed as part of W and is
not explicit, we can apply rσ in line 20 to add every body fact to W. These body facts are in I ∩ I′, so
that they will be added to P in line 19 during the following algorithm iterations. Hence, we can also
apply rσ in line 11 and, thus, add F to P as well.</p>
      <p>For the induction step, we assume that F has a (shortest) derivation path of length n + 1. Accordingly,
there exists a rule instance rσ with head(rσ) = F, where every fact in body(rσ) has a (shortest) derivation
path of length ≤ n. Due to F ∈ I′, we also have body(rσ) ⊆ I′. By Claim 1, F occurred in W, in which
case we can apply rσ in line 20 to add body(rσ) to W and, hence, to C. Thus, we have body(rσ) ⊆ C ∩ I′
and consequently body(rσ) ⊆ P based on our hypothesis. This allows us to apply rσ in line 11 and, thus,
add F to P.</p>
      <p>With the above claims, we can show the correct behavior of the deletion phase:
Claim 3. A fact F is added to the set D if and only if F ∈ I ∖ I′.</p>
      <p>Proof. (⇒) If F ∈ D and F is explicit, then F ∈ E⁻ (cf. line 6) and, thus, F ∈ I ∖ I′. For the case that
F is implicit, we know that F ∈ C and F ∉ P based on line 25. By Claim 2, this also requires that
F ∉ I′. Since the algorithm only works with rule instances from inst(Π, I), we also have C ⊆ I, and
hence F ∈ I ∖ I′.</p>
      <p>(⇐) If F ∈ I ∖ I′ and F is explicit, it has to appear in E⁻ and, thus, in D based on line 6. For the
case that F is implicit, we use a proof by induction over the derivation path length of F:</p>
      <p>For the base case, we assume that F has a derivation path of length 1, which means that there is a
rule instance rσ, where head(rσ) = F and every fact from body(rσ) is explicit. Since F ∈ I ∖ I′, the body
has to contain a fact from E⁻, otherwise F would still be derivable in I′. Initially, we have D = E⁻,
so that we can apply rσ in line 23 and add F to W in the next line. By Claim 1, F will be added to C too.
Because F ∉ I′, we get F ∉ P by Claim 2, so that F is added to D based on line 25.</p>
      <p>For the induction step, we assume that F has a (shortest) derivation path of length n + 1. Accordingly,
there exists a rule instance rσ with head(rσ) = F, where every fact in body(rσ) has a (shortest) derivation
path of length ≤ n. Due to F ∈ I ∖ I′, we have body(rσ) ∩ (I ∖ I′) ≠ ∅. By our hypothesis, we then get
body(rσ) ∩ D ≠ ∅, so that we can apply rσ in line 23 to add F to W and, consequently, to C by Claim 1.
Due to F ∉ I′, we get F ∉ P by Claim 2, so that F will be added to D based on line 25.</p>
      <p>Next, we show the correct behavior of the algorithm’s insertion phase:
Claim 4. Let I* be the updated dataset I of line 26, and E* the corresponding set of explicit facts. When
we reach line 33, we have I = mat(Π, E*).</p>
      <p>Proof. As long as phase = 2, we repeatedly look for rule instances in I where the head is not yet
present, and add this head to I (cf. lines 28 and 30). Accordingly, we extend I with Π(I), as defined in
Section 2. Repeating this until the dataset does not change anymore, i.e., once we cannot add any new
fact, will eventually lead to I = mat(Π, E*) as defined in Section 2.1.</p>
      <p>Based on this, we can now prove that the algorithm correctly processes a single update:
Lemma 1 (Single update). Given a Datalog program Π, a materialized dataset I = mat(Π, E₀), and a
stream of updates Û = ⟨u⟩ with u = (E⁻, E⁺), Algorithm 1 returns as output a stream ⟨mat(Π, E₁)⟩
with E₁ = (E₀ ∖ E⁻) ∪ E⁺.</p>
      <p>Proof. By Claim 3, we obtain D = mat(Π, E₀) ∖ mat(Π, E₁) once we cannot extend D any further, in
which case line 26 is executed and every fact that cannot be derived anymore in mat(Π, E₁) is deleted.
In the same line, we also add E⁺ to I, so that we eventually obtain I = mat(Π, E₁) once we reach
line 33, based on Claim 4.</p>
      <p>Knowing that Algorithm 1 works correctly in the classical way, where we do not take a look at the
next update, we next show that the marking-related computations do not interfere with the computation
of the dataset. First, we consider the case for the current update u₁:
Lemma 2 (No influence on current dataset). Given two sequential updates u and u′ from Û in
Algorithm 1, the marks that are computed due to u′ during the processing of u do not influence the
dataset I that is returned at the end of processing u.</p>
      <p>Proof. The procedures mark and unmark and the function getMarked neither add nor remove facts.
The only cases where a marked fact is required as a condition to perform some operation are in lines 12
and 29, where we just mark a fact and, thus, do not change the dataset.</p>
      <p>To prove that the dataset of the next update u₂ will not be affected by the markings either, we first
show the following claim:
Claim 5. The order in which we add facts to the set W does not have an influence on the dataset I that is
returned at the end of an update’s processing.</p>
      <p>Proof. Lines 20 and 23 do not specify which rule we may choose if several ones are applicable, so that the
order in which we add facts to W can be different each time we use the algorithm. Based on Lemma 1,
however, we know that the algorithm still produces the correct materialized dataset.</p>
      <p>Now, we can prove the following lemma:
Lemma 3 (No influence on next dataset). Given two sequential updates u and u′ from Û in Algorithm 1,
the marks that are computed due to u′ during the processing of u do not influence the dataset I that is
returned at the end of processing u′.</p>
      <p>Proof. Marked facts influence the next update’s processing based on the initialization of the set W in
line 32. Here, we only add implicit facts (by removing the explicit facts deleted by u′), which can only be
marked in lines 12 and 29. In both cases, we mark the head of a rule instance, where the body contains a
marked explicit fact. Based on line 9, we only mark explicit facts that are deleted by u′. Since we use these
explicitly deleted facts to initialize the set D when we prepare the next update’s processing in line 32, we
could apply those rule instances in line 23 during the next update’s processing if we started with an
empty set W. This means that every fact that we add to W in line 32 based on markings would also be
added to W if we processed the next update in the classical way, where we consider each update alone,
without any marking. By Claim 5, changing the order of fact additions to W based on its initialization
does not affect the resulting dataset, so that we obtain the same dataset I at the end as if we had processed
the next update alone, which is correct based on Lemma 1.</p>
      <p>Combining the above lemmas, we can show correctness of Algorithm 1 as follows:
Theorem 1. Given a Datalog program Π, a materialized dataset I = mat(Π, E₀), and a stream of
updates Û = ⟨u₁, u₂, . . .⟩, Algorithm 1 returns as output a stream ⟨mat(Π, E₁), mat(Π, E₂), . . .⟩ with
Eᵢ = (Eᵢ₋₁ ∖ Eᵢ⁻) ∪ Eᵢ⁺ for each uᵢ = (Eᵢ⁻, Eᵢ⁺) ∈ Û.</p>
      <p>Proof. We prove this by induction: As base case, we have the first update u₁ ∈ Û, for which the
algorithm correctly computes mat(Π, E₁) based on Lemmas 1 and 2. In the induction step, we consider
two sequential updates uᵢ, uᵢ₊₁ ∈ Û. By hypothesis, we get the correct result for uᵢ. By Lemma 3,
the marking-related computations do not affect the result of uᵢ₊₁, so that we also obtain the correct
dataset mat(Π, Eᵢ₊₁) by hypothesis.</p>
      <p>(a) Rules for Datalog program trans:
edge(x, y) → path(x, y)
edge(x, z), path(z, y) → path(x, y)</p>
      <p>(b) Rules for the real data program:
nextInWay(x, y₁, w₁), nextInWay(x, y₂, w₂), w₁ ≠ w₂ → connection(w₁, w₂)
nextInWay(x, y₁, w₁), nextInWay(y₂, x, w₂), w₁ ≠ w₂ → connection(w₁, w₂)
nextInWay(y₁, x, w₁), nextInWay(x, y₂, w₂), w₁ ≠ w₂ → connection(w₁, w₂)
nextInWay(y₁, x, w₁), nextInWay(y₂, x, w₂), w₁ ≠ w₂ → connection(w₁, w₂)
connection(w₁, w₂), connection(w₂, w₃) → connection(w₁, w₃)</p>
      <p>The atom nextInWay(x, y, w) states that the node x is followed by the node y on the way w.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Abiteboul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hull</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vianu</surname>
          </string-name>
          , Foundations of Databases, Addison-Wesley,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Motik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Piro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Horrocks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Olteanu</surname>
          </string-name>
          ,
          <article-title>Parallel Materialisation of Datalog Programs in Centralised, Main-Memory RDF Systems</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>28</volume>
          , AAAI Press,
          <year>2014</year>
          , pp.
          <fpage>129</fpage>
          -
          <lpage>137</lpage>
          . URL: https://doi.org/10.1609/aaai.v28i1.8730.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Carral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Koopmann</surname>
          </string-name>
          ,
          <article-title>From Horn-SRIQ to Datalog: A Data-Independent Transformation That Preserves Assertion Entailment</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>33</volume>
          , AAAI Press,
          <year>2019</year>
          , pp.
          <fpage>2736</fpage>
          -
          <lpage>2743</lpage>
          . URL: https://doi.org/10.1609/aaai.v33i01.33012736.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. S.</given-names>
            <surname>Mumick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. S.</given-names>
            <surname>Subrahmanian</surname>
          </string-name>
          ,
          <article-title>Maintaining Views Incrementally</article-title>
          ,
          <source>in: Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data</source>
          , ACM Press,
          <year>1993</year>
          , pp.
          <fpage>157</fpage>
          -
          <lpage>166</lpage>
          . URL: https://doi.org/10.1145/170035.170066.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Motik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. E. F.</given-names>
            <surname>Piro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Horrocks</surname>
          </string-name>
          ,
          <article-title>Incremental Update of Datalog Materialisation: the Backward/Forward Algorithm</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>29</volume>
          , AAAI Press,
          <year>2015</year>
          , pp.
          <fpage>1560</fpage>
          -
          <lpage>1568</lpage>
          . URL: https://doi.org/10.1609/aaai.v29i1.9409.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Illich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Glimm</surname>
          </string-name>
          ,
          <article-title>Delete/Rederive with Marking for Update Streams</article-title>
          ,
          <source>in: The Semantic Web</source>
          , volume
          <volume>15718</volume>
          of Lecture Notes in Computer Science, Springer Nature Switzerland, Cham,
          <year>2025</year>
          , pp.
          <fpage>152</fpage>
          -
          <lpage>168</lpage>
          . URL: https://doi.org/10.1007/978-3-031-94575-5_9.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Eisinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Eiter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Furche</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gottlob</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Linse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pichler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <source>Foundations of Rule-Based Query Answering</source>
          , Springer Berlin Heidelberg,
          <year>2007</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>153</lpage>
          . doi:10.1007/978-3-540-74615-7_1.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>B.</given-names>
            <surname>Motik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Piro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Horrocks</surname>
          </string-name>
          ,
          <article-title>Maintenance of Datalog Materialisations Revisited</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>269</volume>
          (
          <year>2019</year>
          )
          <fpage>76</fpage>
          -
          <lpage>136</lpage>
          . URL: https://doi.org/10.1016/j.artint.2018.12.004.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Motik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Horrocks</surname>
          </string-name>
          ,
          <article-title>Optimised Maintenance of Datalog Materialisations</article-title>
          ,
          <source>in: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          , volume
          <volume>32</volume>
          , AAAI Press,
          <year>2018</year>
          , pp.
          <fpage>1871</fpage>
          -
          <lpage>1879</lpage>
          . URL: https://doi.org/10.1609/aaai.v32i1.11554.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Terdjimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Médini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mrissa</surname>
          </string-name>
          ,
          <article-title>Web Reasoning Using Fact Tagging</article-title>
          ,
          <source>in: Companion Proceedings of The Web Conference 2018</source>
          , WWW '18, International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE,
          <year>2018</year>
          , pp.
          <fpage>1587</fpage>
          -
          <lpage>1594</lpage>
          . doi:10.1145/3184558.3191615.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Z.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <article-title>Optimising Ontology Stream Reasoning with Truth Maintenance System</article-title>
          ,
          <source>in: Proceedings of the 20th ACM International Conference on Information and Knowledge Management</source>
          , ACM Press,
          <year>2011</year>
          , pp.
          <fpage>831</fpage>
          -
          <lpage>836</lpage>
          . URL: https://doi.org/10.1145/2063576.2063696.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Urbani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Margara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J. H.</given-names>
            <surname>Jacobs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>van Harmelen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. E.</given-names>
            <surname>Bal</surname>
          </string-name>
          ,
          <article-title>DynamiTE: Parallel Materialization of Dynamic RDF Data</article-title>
          ,
          <source>in: The Semantic Web - ISWC 2013</source>
          , volume
          <volume>8218</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2013</year>
          , pp.
          <fpage>657</fpage>
          -
          <lpage>672</lpage>
          . URL: https://doi.org/10.1007/978-3-642-41335-3_41.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D. F.</given-names>
            <surname>Barbieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Braga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ceri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Valle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Grossniklaus</surname>
          </string-name>
          ,
          <article-title>Incremental Reasoning on Streams and Rich Background Knowledge</article-title>
          ,
          <source>in: The Semantic Web: Research and Applications</source>
          , volume
          <volume>6088</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2010</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          . URL: https://doi.org/10.1007/978-3-642-13486-9_1.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Dell'Aglio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Valle</surname>
          </string-name>
          ,
          <article-title>Incremental Reasoning on RDF Streams</article-title>
          ,
          <source>in: Linked Data Management</source>
          , Chapman and Hall/CRC,
          <year>2014</year>
          , pp.
          <fpage>413</fpage>
          -
          <lpage>435</lpage>
          . URL: http://dellaglio.org/preprints/14b1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>T.</given-names>
            <surname>Frühwirth</surname>
          </string-name>
          , Constraint Handling Rules, Cambridge University Press,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>