<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>NeSy4PPM: A Python Library for Neuro-Symbolic Predictive Process Monitoring</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jamila Oukharijane</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ivan Donadello</string-name>
          <email>ivan.donadello@unibz.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fabrizio Maria Maggi</string-name>
          <email>maggi@inf.unibz.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Faculty of Engineering, Free University of Bozen-Bolzano</institution>
          ,
          <addr-line>NOI Techpark, via Bruno Buozzi 1, 39100 Bolzano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <kwd-group>
        <kwd>Predictive Process Monitoring</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>Symbolic Background Knowledge</kwd>
        <kwd>Python API</kwd>
      </kwd-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>20</fpage>
      <lpage>24</lpage>
      <abstract>
        <p>NeSy4PPM is the first Python-based library for Predictive Process Monitoring (PPM) that integrates neural models with symbolic background knowledge to improve suffix prediction under specific contextual circumstances. It supports suffix prediction while ensuring compliance with various types of background knowledge, including Declare, MP-Declare, ProbDeclare, and procedural models such as Petri nets and BPMN. In this paper, we present the functionalities of NeSy4PPM and empirically evaluate its performance in terms of prediction efficiency, compliance with the input background knowledge, and overall effectiveness.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>NeSy4PPM</title>
      <p>Version: 0.1</p>
      <p>Download URL: https://github.com/JamilaOUKHARIJANE/NeSy4PPM</p>
      <p>Documentation URL: https://nesy4ppm.readthedocs.io/en/latest/</p>
      <p>Demo video: https://youtu.be/ig4QQGxu49M</p>
      <sec id="sec-2-1">
        <title>1. Introduction</title>
        <p>Predictive Process Monitoring (PPM) has received growing attention in recent years as organizations increasingly aim to anticipate the future evolution of ongoing process instances using historical execution data [<xref ref-type="bibr" rid="ref1">1</xref>]. One of the main tasks in this domain is suffix prediction, which aims to forecast the remaining sequence of activities until a trace is completed.</p>
        <p>Although various tools and frameworks have been developed for suffix prediction, they primarily focus on stationary processes (i.e., processes that do not evolve over time) and rely mainly on neural models, lacking support for incorporating background knowledge (ℬ), such as constraints or logical rules, in the prediction task. NeSy4PPM is the first general-purpose Python library for Neuro-Symbolic PPM that supports both stationary and evolving processes, i.e., those that evolve due to concept drift. It addresses suffix prediction for both activity sequences and activity–resource pairs by combining neural models (e.g., LSTM, Transformer) with diverse types of ℬ, including probabilistic, declarative, and procedural knowledge.</p>
        <p>NeSy4PPM provides an end-to-end pipeline, from data preprocessing to prediction evaluation, and enables users to integrate symbolic ℬ to enhance neural predictions at testing time and better manage process changes over time. The library is modular, extensible, and designed to support both single-attribute (activity) and multi-attribute (activity and resource) suffix prediction tasks.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2. The NeSy4PPM Library</title>
        <p>An event log is a set of traces, where each trace represents an execution of a business process. An event is a tuple (c, a, t, r), where c is the case id, a ∈ 𝒜 is the activity name, t is the timestamp of the event, and r ∈ ℛ is the allocated resource for activity a. We denote the components of an event e = (c, a, t, r) using the notation e.c, e.a, e.t, and e.r. Given a trace σ = ⟨e_1, e_2, …, e_n⟩, the prefix σ≤k of length k ∈ {1, …, n} is the sub-trace including the first k events of σ, i.e., σ≤k = ⟨e_1, …, e_k⟩, with |σ≤k| = k. The corresponding suffix σ&gt;k is the sub-trace obtained by removing the prefix, i.e., σ&gt;k = ⟨e_{k+1}, …, e_n⟩, with |σ&gt;k| = n − k.</p>
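        <p>To make the notation concrete, the prefix and suffix of a trace can be expressed directly as list slicing in plain Python (an illustrative sketch with hypothetical helper names, not part of the NeSy4PPM API):</p>
        <preformat>
```python
# A trace is a list of events; the prefix of length k is the first k events,
# and the suffix is everything after them.

def prefix(trace, k):
    """Return the sub-trace with the first k events of the trace."""
    return trace[:k]

def suffix(trace, k):
    """Return the remaining events after the prefix of length k."""
    return trace[k:]

trace = ["Register", "Approve", "Pay", "Close"]
assert prefix(trace, 2) == ["Register", "Approve"]
assert suffix(trace, 2) == ["Pay", "Close"]
# Prefix and suffix always partition the trace:
assert len(prefix(trace, 2)) + len(suffix(trace, 2)) == len(trace)
```
        </preformat>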
        <p>[Figure 1: Overview of the NeSy4PPM pipeline: 1. Data loading (single event log or separate training/test logs in .xes, .xes.gz, or .csv format) and data preprocessing with prefix extraction and prefix encoding (one-hot, index-based, shrinked index-based, or multi-encoders); 2. Training (LSTM or Transformer); 3. Prediction (greedy or beam search; purely neural, ℬ-filtered, or ℬ-contextualized modes; declarative, probabilistic declarative, or procedural ℬ such as Declare, MP-Declare, BPMN, and Petri nets; activity or activity-and-resource prediction targets); 4. Evaluation (Damerau-Levenshtein and Jaccard similarity, compliance, fitness, computation time).]</p>
        <sec id="sec-2-2-28">
          <title>2.1. Data Preprocessing</title>
          <p>The Data Preprocessing package transforms event logs into neural-compatible inputs. It handles event log loading, prefix extraction, and encoding into formats suitable for Neural Network (NN) models:</p>
          <p>1. Data loading. This step supports two configurations: (i) a single event log, automatically split into training and test sets based on the chronological order of trace start timestamps and a user-defined train/test ratio; or (ii) a pair of separate training and test logs. Logs can be provided in .xes, .xes.gz, or .csv format.</p>
          <p>2. Prefix extraction. All possible prefixes are extracted from the training log traces, and each prefix is turned into an input–label pair for NN training. Given a trace σ = ⟨e_1, …, e_n⟩, this step generates training pairs by selecting the prefixes σ≤k = ⟨e_1, …, e_k⟩ as input, and the subsequent event e_{k+1} as the target label, for each k = 1, …, n − 1. Specifically, e_{k+1}.a serves as the next activity label, and e_{k+1}.r as the next resource label.</p>
          <p>3. Prefix encoding. Since NN inputs must be numerical, prefixes are transformed into feature vectors using one of the following methods: one-hot, index-based, shrinked index-based, or multi-encoders, which we describe in more detail below. Since vectors have variable lengths, zero-padding is applied to ensure fixed-size inputs for NN training.</p>
          <p>In one-hot encoding, each event e_i = ⟨a_i, r_i⟩ is represented by a binary vector v_i = [v_i^a, v_i^r], where v_i^a and v_i^r are one-hot vectors for activity a_i ∈ 𝒜 and resource r_i ∈ ℛ. The feature vector for prefix σ≤k is a matrix of these row vectors.</p>
          <p>In index-based encoding, each event e_i is represented as v_i = [idx(a_i), idx(r_i)], where idx(a_i) and idx(r_i) are indices in 𝒜 and ℛ. The feature vector is the concatenation of the v_i vectors. Both encodings can be adapted for activity-only sequences by omitting the resource vector.</p>
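          <p>As a plain-Python illustration of the index encodings described in this section (hypothetical helper names and toy alphabets, not the NeSy4PPM API):</p>
          <preformat>
```python
# Toy activity and resource alphabets (assumptions for illustration only).
ACTIVITIES = ["Register", "Approve", "Pay"]
RESOURCES = ["Ann", "Bob"]

def one_hot(event):
    """Binary vector [v_a | v_r] for an (activity, resource) event."""
    a, r = event
    va = [1 if x == a else 0 for x in ACTIVITIES]
    vr = [1 if x == r else 0 for x in RESOURCES]
    return va + vr

def index_based(event):
    """[idx(a), idx(r)] using positions in the two alphabets."""
    a, r = event
    return [ACTIVITIES.index(a), RESOURCES.index(r)]

def shrinked_index(event):
    """A single joint index for the (activity, resource) pair."""
    a, r = event
    return ACTIVITIES.index(a) * len(RESOURCES) + RESOURCES.index(r)

e = ("Approve", "Bob")
assert one_hot(e) == [0, 1, 0, 0, 1]
assert index_based(e) == [1, 1]
assert shrinked_index(e) == 3  # 1 * 2 + 1
```
          </preformat>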
          <p>In shrinked index-based encoding, a unique integer index is assigned to each activity–resource pair (a_i, r_i) using a joint index function, resulting in v_i = idx(a_i, r_i). The feature vector is then obtained by concatenating the v_i values.</p>
          <p>In multi-encoders, separate embeddings E_A and E_R are created for activities and resources. Combined embeddings are computed as v_i = Ẽ_A · a_i + Ẽ_R · r_i, where Ẽ_A = E_A ⊕ (E_A ⊗ E_R) and Ẽ_R = E_R ⊕ (E_R ⊗ E_A), with α_A and α_R as alignment weights computed via a shared modulator [<xref ref-type="bibr" rid="ref3">3</xref>].</p>
          <p>2.2. Training
For training, NeSy4PPM implements state-of-the-art NN architectures, supporting both LSTM (Long Short-Term Memory) [<xref ref-type="bibr" rid="ref4">4</xref>] and Transformer [<xref ref-type="bibr" rid="ref3">3</xref>] models. This component is responsible for learning predictive models from the encoded input data and the corresponding target labels. In its basic configuration, the selected NN model learns to predict the next activity â. If the target labels also include resource information, the NN is trained in a multi-output setting to jointly predict the next activity â and its associated resource r̂.</p>
          <p>2.3. Prediction
Prediction in NeSy4PPM is performed using an autoregressive Symbolic[Neuro] component according to Kautz's taxonomy [<xref ref-type="bibr" rid="ref5">5</xref>, Section 2], where an NN model is invoked as a subroutine within a symbolic reasoner that checks the compliance of the NN predictions with ℬ (see Figure 2). This component takes as input a prefix σ≤k, a trained NN model, and ℬ.</p>
          <p>The prediction process begins by encoding the input prefix σ≤k using the same encoding method applied during training. The encoded prefix is then passed to the search predictor, which performs suffix prediction via beam search in an autoregressive manner. At each prediction step h, the predictor evaluates possible continuations based on: (i) the probability distribution over next activities and resources predicted by the trained NN, and (ii) a ℬ compliance score. These are combined using a weight α ∈ [0, 1] to balance prediction likelihood and compliance:</p>
          <p>S(σ≤k+h) = agg(P_NN(σ≤k+h−1, a), P_NN(σ≤k+h−1, r))^(1−α) · comp(⟨σ≤k+h−1, a, r⟩, ℬ)^α   (1)</p>
          <p>Here, agg(P_NN(σ≤k+h−1, a), P_NN(σ≤k+h−1, r)) computes an aggregation (we implemented the average, but other strategies are possible) of the predicted probabilities for the next activity and resource, and the compliance function comp(⟨σ≤k+h−1, a, r⟩, ℬ) returns a score in [0, 1], indicating how well the predicted continuation satisfies ℬ. Our implementation supports several conformance checkers for ℬ, including: the MP-Declare conformance checker [<xref ref-type="bibr" rid="ref6">6</xref>] for declarative ℬ, the ProbDeclare checker [<xref ref-type="bibr" rid="ref7">7</xref>] for probabilistic declarative ℬ, and the procedural conformance checker [<xref ref-type="bibr" rid="ref8">8</xref>] for procedural ℬ such as Petri nets and BPMN. At each step, the selected activity and resource are appended to the prefix, and the search continues until the termination symbol (⊥) is predicted.</p>
          <p>The search predictor is modular and configurable, supporting various settings across four dimensions. Prediction modes include purely neural prediction (with α = 0), BK-contextualized prediction (α &gt; 0), and BK-filtered prediction, in which the full predicted trace is retained only if compliant with ℬ upon termination. Search strategies include beam search for exploring multiple candidate continuations and greedy search for selecting only the most likely continuation. The predictor supports different ℬ types as described above. Finally, this NeSy component supports various prediction targets, allowing for prediction of both activity and resource, or activity only. This flexible configuration enables adaptation to a wide range of predictive monitoring scenarios and knowledge models.</p>
          <p>2.4. Evaluation
To evaluate the performance of neuro-symbolic predictions, the Evaluation package supports the following metrics: (i) the Damerau-Levenshtein Similarity (DLS) between the predicted suffix σ̂&gt;k and the ground truth suffix σ&gt;k; (ii) the Jaccard similarity between the predicted suffix σ̂&gt;k and the ground truth suffix σ&gt;k; (iii) the Compliance, defined as the proportion of predicted traces (i.e., the prefix concatenated with the predicted suffix) that satisfy ℬ (i.e., Declare or MP-Declare constraints); (iv) the Fitness, representing the degree to which the predicted traces align with a procedural model; and (v) the Computation time, referring to the average and standard deviation of prediction times.</p>
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>3. Maturity</title>
        <p>The maturity of NeSy4PPM is demonstrated by its support for a broad range of features, including: (i) multiple encoding methods; (ii) neural architectures, including LSTM and Transformer models; (iii) symbolic ℬ representations, including declarative models (Declare, MP-Declare, ProbDeclare) and procedural models (Petri nets and BPMN); (iv) prediction search strategies, such as greedy and beam search; (v) prediction modes, including purely neural, ℬ-contextualized, and ℬ-filtered predictions; and (vi) prediction targets, supporting both single-attribute (activity) and multi-attribute (activity and resource) prediction tasks. These features enable flexible and comprehensive evaluation across diverse experimental configurations.</p>
        <p>In addition, we evaluated the computational performance and prediction accuracy of NeSy4PPM using two real-world event logs, Helpdesk and Request For Payment, under concept drift assumptions. For each log, traces were chronologically sorted: the first 80% were used for training, and the remaining 20% for testing. To simulate concept drift, ℬ was mined from under-represented behaviors in the training set using the DeclareMiner tool, selecting constraints with less than 20% support. We extracted two MP-Declare constraints for the Helpdesk log and eight for the Request For Payment log. We then retained only those traces in the test sets that were compliant with the respective constraints. This resulted in MP-Declare-compliant test sets containing 820 traces for Helpdesk and 445 traces for Request For Payment.</p>
        <p>[Table 1: Average and standard deviation of prediction times and prediction accuracy (DLS and compliance) for greedy and beam search, purely neural and combined with MP-Declare ℬ, across the one-hot, index-based, shrinked index-based, and multi-encoders encodings.]</p>
        <p>We performed experiments using different prediction configurations, including Greedy Search and Beam Search with a beam size of b = 3. For each strategy, we evaluated: (i) purely neural predictions, (ii) ℬ-filtered predictions, and (iii) ℬ-contextualized predictions. All experiments used a Transformer architecture as the NN model.</p>
        <p>Table 1 reports the average and standard deviation of prediction times (in seconds), as well as prediction accuracy using the DLS and compliance metrics for both activity and resource prediction, across all combinations of prediction methods – greedy search [<xref ref-type="bibr" rid="ref3">3</xref>], greedy search with ℬ-filtered prediction, greedy search with ℬ-contextualized prediction, beam search, beam search with ℬ-filtered prediction [<xref ref-type="bibr" rid="ref9">9</xref>], and beam search with ℬ-contextualized prediction – and all considered encoders.</p>
        <p>From the table, we observe that while the use of ℬ in combination with greedy or beam search introduces additional computational overhead due to ℬ conformance checking, it leads to a clear improvement in prediction accuracy. These results highlight the effectiveness of integrating symbolic ℬ into the prediction algorithm. Notably, beam search with ℬ-contextualized prediction achieves the best DLS scores for both activity and resource prediction, as well as the highest compliance on both logs. We acknowledge that this comes at the cost of increased computational time.</p>
        <p>In contrast, the purely neural prediction methods (greedy and beam search) struggle with resource prediction, confirming that NN models alone fail to capture drifts in resource allocation. Although the ℬ-filtered methods apply symbolic ℬ as a post-prediction filter, their overall performance remains similar to the purely neural prediction methods in terms of both prediction accuracy and compliance. This limited improvement can be attributed to the restricted beam size: when activity-resource pairs representing drift are rare or unseen during training, the NN model assigns them low probabilities. As a result, these pairs are often excluded from the beam due to the prioritization of higher-probability candidates.</p>
        <p>As future work, we plan to implement multi-threading to parallelize the beam search to reduce the computational time. Furthermore, we aim to integrate fuzzy ℬ compliance checking to support reasoning under uncertainty [<xref ref-type="bibr" rid="ref10">10</xref>], incorporate ℬ into the training phase via the loss function [<xref ref-type="bibr" rid="ref11">11</xref>], and include explanation techniques to enhance interpretability.</p>
        <p>This study was partially funded by the European Union - NextGenerationEU, in the framework of the iNEST - Interconnected Nord-Est Innovation Ecosystem (iNEST ECS00000043 – CUP I43C22000250006). The views and opinions expressed are solely those of the authors and do not necessarily reflect those of the European Union, nor can the European Union be held responsible for them. The study was also supported by Fondazione Cariverona within the ReSS-Pro project.</p>
      </sec>
      <sec id="sec-2-4">
        <title>Declaration on Generative AI</title>
        <p>The authors have not employed any Generative AI tools.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ceravolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Comuzzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>De Weerdt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Di Francescomarino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Maggi</surname>
          </string-name>
          ,
          <article-title>Predictive process monitoring: concepts, challenges, and future research directions</article-title>
          ,
          <source>Process Science</source>
          <volume>1</volume>
          (
          <year>2024</year>
          )
          <elocation-id>2</elocation-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Oukharijane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Donadello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Maggi</surname>
          </string-name>
          ,
          <article-title>A general framework for neuro-symbolic predictive process monitoring</article-title>
          ,
          <source>in: Business Process Management Workshops, Lecture Notes in Business Information Processing</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Lazo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ñanculef</surname>
          </string-name>
          ,
          <article-title>Multi-attribute transformers for sequence prediction in business process management</article-title>
          ,
          <source>in: DS</source>
          , volume
          <volume>13601</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>184</fpage>
          -
          <lpage>194</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Tax</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Verenich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>La Rosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dumas</surname>
          </string-name>
          ,
          <article-title>Predictive business process monitoring with LSTM neural networks</article-title>
          ,
          <source>in: CAiSE</source>
          , volume
          <volume>10253</volume>
          of Lecture Notes in Computer Science, Springer,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Sarker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Eberhart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hitzler</surname>
          </string-name>
          ,
          <article-title>Neuro-symbolic artificial intelligence: Current trends</article-title>
          ,
          <source>AI Communications</source>
          <volume>34</volume>
          (
          <year>2022</year>
          )
          <fpage>197</fpage>
          -
          <lpage>209</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Burattin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Maggi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sperduti</surname>
          </string-name>
          ,
          <article-title>Conformance checking based on multi-perspective declarative process models</article-title>
          ,
          <source>Expert Syst. Appl</source>
          .
          <volume>65</volume>
          (
          <year>2016</year>
          )
          <fpage>194</fpage>
          -
          <lpage>211</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Alman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Maggi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peñaloza</surname>
          </string-name>
          ,
          <article-title>Probabilistic declarative process mining</article-title>
          ,
          <source>Inf. Syst</source>
          .
          <volume>109</volume>
          (
          <year>2022</year>
          )
          <fpage>102033</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Berti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. M. P.</given-names>
            <surname>van der Aalst</surname>
          </string-name>
          ,
          <article-title>Reviving token-based replay: Increasing speed while improving diagnostics</article-title>
          , in: ATAED@Petri Nets/ACSD,
          <year>2019</year>
          , pp.
          <fpage>87</fpage>
          -
          <lpage>103</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Di Francescomarino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ghidini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Maggi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Petrucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Yeshchenko</surname>
          </string-name>
          ,
          <article-title>An eye into the future: Leveraging a-priori knowledge in predictive business process monitoring</article-title>
          ,
          <source>in: BPM</source>
          , Springer,
          <year>2017</year>
          , pp.
          <fpage>252</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>I.</given-names>
            <surname>Donadello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Felli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Innes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Maggi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          <article-title>Conformance checking of fuzzy logs against declarative temporal specifications</article-title>
          ,
          <source>in: BPM</source>
          , volume
          <volume>14940</volume>
          ,
          <year>2024</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>E.</given-names>
            <surname>Umili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. P.</given-names>
            <surname>Licks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Patrizi</surname>
          </string-name>
          ,
          <article-title>Enhancing deep sequence generation with logical temporal knowledge</article-title>
          ,
          <source>in: PMAI@ECAI</source>
          , volume
          <volume>3779</volume>
          of CEUR Workshop Proceedings, CEUR-WS.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>