CEUR Workshop Proceedings, Vol-1757 (https://ceur-ws.org/Vol-1757/paper4.pdf)
     Model and Event Log Reductions to Boost
         the Computation of Alignments

                     Farbod Taymouri and Josep Carmona

              Universitat Politècnica de Catalunya, Barcelona (Spain)
                        {taymouri,jcarmona}@cs.upc.edu



      Abstract. The alignment of observed and modeled behavior is a piv-
      otal issue in process mining, because it opens the door for assessing the
      quality of a process model, as well as for using the model as a precise
      predictor of the execution of a process. This paper presents a novel tech-
      nique for the reduction of a process model based on the notion of indi-
      cation, by which the occurrence of an event in the model reveals the
      occurrence of some other events, hence relegating the latter set to less
      important information when a model-log alignment is computed. Once
      indication relations are computed in the model, both model and log can
      be reduced accordingly, and then fed to state-of-the-art approaches for
      computing alignments. Finally, the (macro)-alignment derived is expanded
      in those parts containing high-level events that represent a set of indi-
      cated events, using an efficient algorithm taken from bioinformatics
      that guarantees optimality in the local parts of the alignment. The imple-
      mentation of the presented techniques shows a significant reduction both
      in computation time and in memory usage, the latter being a significant
      barrier to applying the alignment technology on large instances.


1   Introduction
Nowadays many systems generate event logs, which are footprints left by pro-
cess executions. Process mining delves into this information, and examines it
to extract, analyze and enhance evidence-based process models [10]. One of the
challenges in process mining is how to align a process model to a set of traces
forming an event log. Given a trace representing a real process execution, an op-
timal alignment provides the best trace the process model can provide to imitate
the observed trace [1]. Alignments are crucial for important metrics like fitness,
precision and generalization [1,2].
    This paper presents a model-based technique for reducing a process model
and the observed behavior that both preserves the semantics of the process model
and retains as much information of the original observed behavior as possible.
The technique addresses the main problem of current approaches for alignment
computation: their complexity in both space and time. The overall
idea relies on the notion of indication between activities of the process model
when it is represented as a Petri net. An indication relation between a set of
transitions (indicated set) and another transition (indicator) denotes a causal




firing relation in the model, which expresses that the presence in any model’s
sequence of the indicator transition requires the presence of the indicated set
as well. The notion of indication is inspired from the reveals relation from [3].
We use a well-known technique to find logically independent parts of a graph
(known as SESEs in [7]), which are then used to gather indication relations effi-
ciently. These relations dictate which parts of a process model are abstracted as
a single, high-level node. Once the model is reduced, the observed trace to align
is projected (hence, reduced as well) into the reduced model’s alphabet. This
way, not only the model but also the trace is reduced, which in turn significantly
alleviates the alignment computation, especially for well-structured
process models, where many indication relations may exist. Once alignments are
computed, the final step is also an interesting contribution of this paper: to
cast the well-known Needleman–Wunsch algorithm [6] to expand locally each
high-level part of the alignment computed, using the indication relation.


2   Related Work

The seminal work in [1] proposed the notion of alignment, and developed a tech-
nique to compute optimal alignments for a particular class of process models.
For each trace σ in the log, the approach consists of exploring the synchronous
product of the model's state space and σ. In this exploration, the shortest path is
computed using the A∗ algorithm, once costs for model and log moves are de-
fined. The approach is implemented in ProM, and can be considered as the
state-of-the-art technique for computing alignments. Several optimizations have
been proposed to the basic approach to speed up and improve memory con-
sumption. The recent work in [9] proposed a divide and conquer strategy based
on Integer Linear Programming (ILP) approach to compute approximate align-
ments. Despite its memory and time efficiency, it cannot guarantee that an
(optimal) alignment is obtained.
    The work in [5] presented a decomposition approach using SESEs for con-
formance checking of the model and observed behavior. The proposed approach
decomposes a given model into smaller parts via SESEs and then applies confor-
mance checking to each part independently. This technique is very efficient, but
the result is decisional (a yes/no answer on the fitness of the trace). Recently,
[12] proposed an algorithm that obtains an optimal alignment (called a proper
optimal alignment) from the decomposed alignments whenever this is possible.
Otherwise, it produces a so-called pseudo-alignment which, as in the case of [9],
may not be executable in the net.
    The seminal work [4] first introduced the notion of reveals relation, which
determines that whenever an action a happens, then the occurrence of another
action b is inevitable. The notion of indication of this paper is inspired by the
reveals relation.
    The Refined Process Structure Tree (RPST), proposed by [11], is a graph
parsing technique that provides the well-structured parts of a graph. The result-
ing parse tree is unique and modular, i.e., a local change in the workflow graph




results in a local change of the parse tree. It can be computed in linear time
using the method proposed in [8], which is based on the triconnected components
of a biconnected graph. That approach only works with single-source, single-sink
workflow graphs, which hampers its applicability to real-world problems with
many source and sink nodes. The work in [7] presents a more efficient way to
compute the RPST which can deal with multiple-source, multiple-sink workflow
graphs.


3     Preliminaries

A Petri net is a 3-tuple N = ⟨P, T, F⟩, where P is the set of places, T is the set
of transitions, P ∩ T = ∅, and F : (P × T ) ∪ (T × P ) → {0, 1} is the flow relation.
A marking of a Petri net represents the number of tokens each place holds. Given
a node x ∈ P ∪ T , its pre-set and post-set (in graph adjacency terms) are denoted
by •x and x• respectively. A WF-net is a Petri net where there is a place start
(denoting the initial state of the system) with no incoming arcs and a place end
(denoting the final state of the system) with no outgoing arcs, and every other
node is on a path between start and end. Fig. 1(a) represents a WF-net.
Given an alphabet of events T = {t1 , . . . , tn }, a trace is a word σ ∈ T ∗ that
represents a finite sequence of events. An event log L ∈ B(T ∗ ) is a multiset of
traces1 . An alignment is represented by a two-row matrix, where the top and
bottom rows represent moves on the log and the model respectively. For example,
given the trace t1 t4 t2 t5 t8 and the model in Fig. 1(a), an example of alignment is:

                                    t1 ⊥ t4 t2 t5 ⊥ t8
                                α =
                                    t1 t2 t4 ⊥ t5 t7 t8
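In code, an alignment can be viewed as a sequence of paired moves. The following sketch (our own illustration, not part of the authors' tooling) encodes the alignment α above, with `None` standing for the ⊥ symbol, and counts its asynchronous moves under a unit cost function:

```python
# An alignment as a list of (log_move, model_move) pairs; None stands
# for the "no move" symbol ⊥ of the two-row matrix notation.
def alignment_cost(alignment):
    """Unit cost of an alignment: one per asynchronous (log-only or
    model-only) move; synchronous moves cost nothing."""
    return sum(1 for log_mv, model_mv in alignment if log_mv != model_mv)

# The alignment α from the text, for the trace t1 t4 t2 t5 t8:
alpha = [("t1", "t1"), (None, "t2"), ("t4", "t4"), ("t2", None),
         ("t5", "t5"), (None, "t7"), ("t8", "t8")]
print(alignment_cost(alpha))  # 3 asynchronous moves
```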

    Let F ⊆ E be a set of edges of a directed graph ⟨V, E, ℓ⟩; GF =
⟨VF , F ⟩ is the subgraph formed by F if VF is the smallest set of nodes such
that GF is a subgraph. A node in VF is boundary with respect to GF if it is
connected to nodes in VF and in V − VF , otherwise it is interior. A boundary
node u of GF is an entry node if no incoming edge of u belongs to F or if all
outgoing edges of u belong to F . A boundary node v of GF is an exit node of
GF if no outgoing edge of v belongs to F or if all incoming edges of v belong to
F . A GF with one entry and one exit node is called a SESE. If a SESE contains
only one edge, it is called trivial. A SESE of G is called canonical if it does not
overlap with any other SESE of G, but it can be nested in or disjoint with other
SESEs. For example, in Fig. 1(b) all SESEs are canonical; S2 and S4 are nested,
S3 and S2 are disjoint. A WF-net can be viewed as a workflow graph if no
distinction is made between its nodes. The WF-graph of Fig. 1(a) is presented
in Fig. 1(b).
Let G be a graph; its Refined Process Structure Tree (RPST) is the set
of all canonical SESEs of G. Because canonical fragments are either nested or
disjoint, they form a hierarchy. In a typical RPST, the leaves are trivial SESEs
and the root is the whole graph. Fig. 1(c) is the RPST of the WF-graph in Fig. 1(b);
1
    B(A) denotes the set of all multisets of the set A.




S1 , which is the entire graph, is at the root, and the leaves are trivial SESEs,
each containing a single edge.
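The boundary/entry/exit definitions above can be checked mechanically. The sketch below is our own illustration (the function name and the edge-list encoding are assumptions, not the paper's notation): given the edges of a directed graph and a fragment F, it classifies the boundary nodes of GF as entries and exits exactly as defined above.

```python
def classify_boundary(edges, F):
    """Classify boundary nodes of the fragment G_F given by edge set F.

    A node of G_F is interior if all its neighbours are inside G_F,
    boundary otherwise. A boundary node u is an entry if no incoming
    edge of u belongs to F or all outgoing edges of u belong to F; a
    boundary node v is an exit if no outgoing edge of v belongs to F
    or all incoming edges of v belong to F.
    """
    F = set(F)
    nodes_f = {n for edge in F for n in edge}
    entries, exits = set(), set()
    for u in nodes_f:
        nbrs = {b for (a, b) in edges if a == u} | {a for (a, b) in edges if b == u}
        if nbrs <= nodes_f:
            continue  # interior: only connected to nodes of G_F
        inc_all = {(a, b) for (a, b) in edges if b == u}
        out_all = {(a, b) for (a, b) in edges if a == u}
        if not (inc_all & F) or (out_all and out_all <= F):
            entries.add(u)
        if not (out_all & F) or (inc_all and inc_all <= F):
            exits.add(u)
    return entries, exits

# Tiny example: the fragment {(a, b)} inside the path s -> a -> b -> t
# has entry node a and exit node b.
print(classify_boundary([("s", "a"), ("a", "b"), ("b", "t")], [("a", "b")]))
```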




    Fig. 1: (a) WF-net,(b) Workflow graph,(c) RPST, (d) Reduced WF-net




4   Overall Framework
Given a process model N , represented by a Petri net, and σ as observed behavior,
the strategy of this paper is sketched in Fig. 2. We now provide descriptions of
each stage.




     Fig. 2: Overall framework for boosting the computation of alignments




 – Model Reduction: N will be reduced based on the notion of indication rela-
   tion which results in Nr . It contains some abstract events representing the
   indicators of certain indicated sets of transitions. Section 5.1 explains it in
   detail.
 – Log Reduction: Using the indication relations computed in the model, σ
   is projected into the remaining labels in Nr , resulting in σr . Section 5.2
   describes this step.
 – Computing Alignment: Given Nr and σr , approaches like [1] and [9] can
   be applied to compute alignments. Because both Nr and σr contain abstract
   events, the computed alignment will contain them as well. We call it a
   macro-alignment.
 – Alignment Expansion: For each abstract element of a macro-alignment, the
   modeled and observed indications are confronted. Needleman–Wunsch al-
   gorithm [6] is adapted to compute optimal alignments. Section 6 will be
   centered on this.
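The four stages above compose into a single pipeline. The following skeleton is purely illustrative; all function names are placeholders we introduce here, not the prototype's API:

```python
def boosted_alignment(net, trace, reduce_model, reduce_log, align, expand):
    """Hypothetical orchestration of the four stages described above."""
    net_r, indications = reduce_model(net)    # Model Reduction (Section 5.1)
    trace_r = reduce_log(trace, indications)  # Log Reduction (Section 5.2)
    macro = align(net_r, trace_r)             # Macro-alignment, e.g. A* or ILP.R
    return expand(macro, indications)         # Alignment Expansion (Section 6)
```

Any concrete reducer, aligner and expander with these shapes can be plugged in; the stages stay independent, which is what lets the reduction be combined with either [1] or [9].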


5     Reduction of Model and Observed Behavior

5.1   The Indication Relation

Let us consider the model in Fig. 1(a). For any sequence of the model, whenever
transition t4 fires it is clear that transitions t1 , t3 , and t2 have fired as well.
Formally:

Definition 1 (Indication Relation). Let N = ⟨P, T, F⟩. The indication is
defined as a function I : T → [P(T )+ ]+ such that, for any sequence σ ∈ L(N ),
if t occurs in σ then every element of I(t) occurs in σ. If I(t) = ω1 ω2 . . . ωn , then
the elements of ωm precede the elements of ωn in σ for 1 ≤ m < n. I(t) is called
linear if it contains only singleton sets, i.e. ∀ωi ∈ I(t), |ωi | = 1; otherwise it is
non-linear.
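As a sanity check, Definition 1 can be tested against a concrete firing sequence. The sketch below is our own illustration: it flattens the nested notation into a plain sequence of sets (so I(t4) becomes [{t1}, {t2, t3}, {t4}]) and verifies both the occurrence and the ordering condition.

```python
def respects_indication(trace, t, indication):
    """Check Definition 1 on one firing sequence.

    `indication` is I(t) given as a list of sets of transitions.  If t
    occurs in `trace`, every indicated transition must occur, and each
    element of the m-th set must fire before each element of any later
    set.  Ordering is checked on first occurrences (a simplification).
    """
    if t not in trace:
        return True  # the definition only constrains sequences containing t
    needed = [x for w in indication for x in w]
    if any(x not in trace for x in needed):
        return False
    pos = {x: trace.index(x) for x in needed}
    for m in range(len(indication) - 1):
        for later in indication[m + 1:]:
            if any(pos[a] >= pos[b] for a in indication[m] for b in later):
                return False
    return True

# I(t4) from Fig. 1(a), flattened: t1 first, then t2 and t3 in any
# order, then t4.
i_t4 = [{"t1"}, {"t2", "t3"}, {"t4"}]
print(respects_indication(("t1", "t3", "t2", "t4"), "t4", i_t4))  # True
print(respects_indication(("t2", "t1", "t3", "t4"), "t4", i_t4))  # False
```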

For example, in Fig. 1(a), I(t4 ) = {t1 }{{t2 }, {t3 }}{t4 } (non-linear) and
I(t8 ) = {t7 }{t8 } (linear). SESEs are potential candidates for identifying indi-
cation relations inside a WF-net: the exit node of a SESE is the potential in-
dicator of the nodes inside the SESE. Since the entry/exit nodes of a SESE can
be either places or transitions, SESEs are categorized as (P, P ), (P, T ), (T, P )
or (T, T ). In case the SESE is linear, indication relations can be extracted easily
and the corresponding SESE is reduced (see Fig. 3).

Fig. 3: Linear SESEs and their reduction.
    Non-linear cases are decomposed into linear ones so that indication relations
can be computed directly on the linear components extracted. After that, the
indication relation of the corresponding linear SESEs are computed and they are




reduced as well. This procedure must be done with caution to avoid reaching a
deadlock situation; hence a post-verification must be done after the reduction of
these linear parts. Informally, the verification is only needed for a particular type
of linear SESE ((T, T )), and consists of validating the property of the SESE after
the reduction. The verification is necessary in these cases because non-linear
SESEs may contain linear indications at the nested level which cannot be
extracted due to choice or loop constructs (see Fig. 4).




Fig. 4: (a) Non-linear (T,T), (b) Non-linear (P,T). Here indication relations
cannot be computed at the whole-SESE level; i.e., in (a) t5 indicates neither
t2 nor t4 .
    The reduction schema is depicted in Fig. 5. Starting from the RPST, a
top-down approach is applied that searches for indication-based reductions that
preserve the language of the initial model once the net is expanded back, i.e.,
the language of the model must be preserved after reduction. As mentioned
above, the reduction of non-linear SESEs must be accompanied by a post-
verification; for instance, Fig. 6 shows that in spite of the indication arising
from SESE S2 , the net cannot be reduced without changing the language. To
put it another way, this reduction would cause a deadlock in the reduced model,
and hence must be avoided: looking at the reduced result in Fig. 6, transition
t5 (N ew) will never fire after the reduction. Notice that the reduction can be
applied more than once, until saturation (hence the arc back from the node
“Reduced WF-net” to the node “WF-net”).

Fig. 5: Schema for reduction of a WF-net.
    Fig. 7 shows an example (for the sake of simplicity only linear SESEs are
shown). SESE S2 is inherently linear, while the rest come from the decompo-
sition of non-linear SESEs. The reduction schema is as follows: since S2 is
inherently a linear SESE, it can be reduced according to Fig. 3 without any
post-verification. The remaining linear SESEs will also be reduced




      Fig. 6: Incorrect indication-based reduction: a deadlock is introduced.




accordingly, and a post-verification is done after each reduction to check that
no deadlock arises. One can see that all reductions pass the verification, except
for S7 , whose reduction induces a deadlock. Applying the reduction once results
in Fig. 7(b). As mentioned earlier, the reduction can be applied repeatedly until
no further reduction can be made. Fig. 7(c) is the reduction of the model in
Fig. 7(b), and no more reductions can be made from this model.




5.2    Reduction of Observed Behavior




Given a reduced model Nr and σ, we show how to produce σr . We will use the
reduced model in Fig. 7(b) and the trace σ1 = t1 t5 t3 t11 t10 t21 t6 t2 t7 t16 t25 t19 t20 t26 .
The indication of t5(N ew) in Fig. 7(b), which is linear, equals {t5 }{t15 }. So the
observed indication for this abstract node is σ1↓I(t5(N ew) ) = t5 . After comput-
ing the observed indication, the reduced trace is t1 t5(N ew) t3 t11 t10 t21 t6 t2 t7 t16 t25
t19 t20 t26 . For t17(N ew) , I(t17(N ew) ) = {t3 }{{t10 }, {t11 }}{t17 }, which is non-linear
and merges two linear indications, I1 (t17(N ew) ) = {t3 }{t10 }{t17 } and I2 (t17(N ew) ) =
{t3 }{t11 }{t17 }. So the projection must be done for each linear indication sepa-
rately: σ1↓I1 (t17(N ew) ) = t3 t10 and σ1↓I2 (t17(N ew) ) = t3 t11 , removing transitions t3 ,
t10 , t11 and t17 from the current trace (notice that t17 does not appear originally,
hence it is not projected). Finally, we need to insert t17(N ew) into the reduced
trace; it is inserted at the position of t10 , because the end transition of the
abstract node, i.e. t17 , did not happen in σ, and t10 happened last in σ. Therefore
the reduced trace so far is t1 t5(N ew) t17(N ew) t21 t6 t2 t7 t16 t25 t19 t20 t26 . By applying
this process to the rest of the abstract nodes (t16(N ew) , t22(N ew) ), we reach
σr = t1 t5(N ew) t17(N ew) t21 t16(N ew) t22(N ew) t26 .
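The reduction of one abstract node can be sketched as follows. This is our own illustration, with one simplifying assumption labeled in the code: we always insert the abstract event at the position of the last observed indicated event, which coincides with the paper's placement rule in the example above.

```python
def reduce_trace(trace, abstract, indicated):
    """Reduce an observed trace for one abstract node.

    `indicated` is the set of original transitions the abstract event
    replaces.  The observed indication (the projection of the trace
    onto that set) is returned for later use in the expansion step;
    the indicated events are removed and the abstract event is placed
    where the last of them was observed (a simplification of the
    paper's placement rule, which it matches on this example).
    """
    observed = [e for e in trace if e in indicated]
    if not observed:
        return list(trace), observed
    last = max(i for i, e in enumerate(trace) if e in indicated)
    reduced = []
    for i, e in enumerate(trace):
        if i == last:
            reduced.append(abstract)
        elif e not in indicated:
            reduced.append(e)
    return reduced, observed

# The running example: reduce sigma1 for t5(New), then for t17(New).
sigma1 = ["t1", "t5", "t3", "t11", "t10", "t21", "t6", "t2",
          "t7", "t16", "t25", "t19", "t20", "t26"]
step1, obs5 = reduce_trace(sigma1, "t5(New)", {"t5", "t15"})
step2, obs17 = reduce_trace(step1, "t17(New)", {"t3", "t10", "t11", "t17"})
print(obs17)  # ['t3', 't11', 't10'] -- the observed indication
print(step2[:3])  # ['t1', 't5(New)', 't17(New)'] -- as in the text
```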




                                  (a)




                                                                          (c)

                          (b)

    Fig. 7: (a) Process model, (b) One-time reduced (c) Two-times reduced.


6   Expansion through Local Optimal Indication
    Alignments
After reducing a given process model and corresponding observed behavior,
we can use current methods for computing alignments [1,9] to align Nr and
σr , deriving αr . For example, the following is the macro-alignment of σr =
t1 t5(N ew) t17(N ew) t21 t16(N ew) t22(N ew) t26 and the model in Fig. 7(b):

           t1 t5(N ew) t17(N ew) t21 ⊥   ⊥        t16(N ew) t22(N ew) t26
      αr =
           t1 ⊥        t17(N ew) t21 t24 t5(N ew) t16(N ew) t22(N ew) t26




    When mapped to linear indications, the indication of an abstract node and
the corresponding observed indication are both sequences of events; hence for
each linear combination of modeled/observed indications, we can adapt the
dynamic-programming approach from [6] (used in bioinformatics) to align two
sequences. As an example, we use the indication of t17(N ew) and its observed
indication computed in the previous section.




    To achieve this goal, we create a table for each linear indication, where the
first row and first column are filled with the observed and abstract-node indica-
tions respectively, as depicted in Tables 1(a) and 1(b). The second row and sec-
ond column are initialized with the numbers 0, −1, −2, . . ., depicted in yellow.
The task then is to fill the remaining cells as follows:

SIM (ti , tj ) = M AX(SIM (ti−1 , tj−1 ) + s(ti , tj ), SIM (ti−1 , tj ) − 1, SIM (ti , tj−1 ) − 1)

where SIM (ti , tj ) represents the similarity score between ti and tj , and s(ti , tj )
is the substitution score for aligning ti and tj : it is 0 when they are equal and
−1 otherwise.
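The recurrence above translates directly into code. The following sketch fills the SIM table with the scores from the text (match 0, mismatch and gap -1) and traces back one optimal alignment; breaking ties toward the diagonal is our choice here, whereas the trace-back described below may expose several co-optimal alignments.

```python
def nw_align(observed, modeled, gap=-1):
    """Needleman-Wunsch global alignment of two event sequences.

    Match scores 0, mismatch -1, gap -1, as in the text.  Returns the
    bottom-right SIM score and one optimal alignment as (obs, mod)
    pairs, with None standing for the gap symbol.
    """
    s = lambda x, y: 0 if x == y else -1
    n, m = len(observed), len(modeled)
    SIM = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        SIM[i][0] = i * gap
    for j in range(1, m + 1):
        SIM[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            SIM[i][j] = max(SIM[i - 1][j - 1] + s(observed[i - 1], modeled[j - 1]),
                            SIM[i - 1][j] + gap,
                            SIM[i][j - 1] + gap)
    # Trace back one optimal alignment (ties broken toward the diagonal).
    i, j, pairs = n, m, []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and \
           SIM[i][j] == SIM[i - 1][j - 1] + s(observed[i - 1], modeled[j - 1]):
            pairs.append((observed[i - 1], modeled[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and SIM[i][j] == SIM[i - 1][j] + gap:
            pairs.append((observed[i - 1], None)); i -= 1
        else:
            pairs.append((None, modeled[j - 1])); j -= 1
    return SIM[n][m], pairs[::-1]

# The example from the text: observed t3 t10 against I1(t17(New)).
score, pairs = nw_align(["t3", "t10"], ["t3", "t10", "t17"])
print(score)   # -1, as in Table 1
print(pairs)   # [('t3', 't3'), ('t10', 't10'), (None, 't17')]
```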
     The final step of the algorithm is the trace-back for the best alignment. In
the example above, the bottom-right corner of Table 1 holds the score −1. Note
that there may be two or more possible alignments between the two sequences.
The immediate predecessor of the cell with value −1 from which the maximum
score is obtained is located diagonally, and its value is 0. If two or more cells
point back, there can be two or more possible alignments. By continuing the
trace-back in this way, one eventually reaches the 0th row and 0th column, at
which point the alignment of the two sequences has been found.
     Alignments can be represented by a sequence of paired elements, for example
α1 = (t3 , t3 )(t11 , t11 )(⊥, t17 ), α2 = (t3 , t3 )(t10 , t10 )(⊥, t17 ), and the final align-
ment representing the non-linear indication is α = (t3 , t3 ){(t11 , t11 ), (t10 , t10 )}(⊥
, t17 ). This information is stored for each abstract node.
     After computing local alignments for abstract nodes, we can use them to
expand corresponding abstract nodes in a given αr . The policy of expansion
depends on whether the abstract node is in synchronous or asynchronous move.
     In αr , t17(N ew) is in a synchronous move, so we can expand it by its local
alignment, which results in:

          t1 t5(N ew) t3 t11 t10 ⊥   t21 ⊥   ⊥        t16(N ew) t22(N ew) t26
      α =
          t1 ⊥        t3 t11 t10 t17 t21 t24 t5(N ew) t16(N ew) t22(N ew) t26




The same happens for t16(N ew) and t22(N ew) , which results in:

     t1 t5(N ew) t3 t11 t10 ⊥   t21 ⊥   ⊥        t6 t2 t7 ⊥  ⊥  t16 t25 t19 t20 ⊥   t26
 α =
     t1 ⊥        t3 t11 t10 t17 t21 t24 t5(N ew) ⊥  t2 t7 t6 t8 t16 t25 t19 t20 t22 t26

    On the other hand, t5(N ew) appears in αr in asynchronous moves on both the
model and the observed trace. The policy here is to expand the move on log and
the move on model independently: the move on log is expanded using the observed
indication, and the move on model is expanded using the abstract node's
indication, which results in:

     t1 t5 t3 t11 t10 ⊥   t21 ⊥   ⊥   ⊥  t6 t2 t7 ⊥  ⊥  t16 t25 t19 t20 ⊥   t26
 α =
     t1 ⊥  t3 t11 t10 t17 t21 t24 t15 t5 ⊥  t2 t7 t6 t8 t16 t25 t19 t20 t22 t26
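The two expansion policies can be sketched in one pass over the macro-alignment. This is our own illustration of the rules just described (the function name and dictionary encoding are assumptions): synchronous abstract moves are replaced by the stored local alignment, while asynchronous abstract moves are expanded independently with the observed indication (log side) or the model indication.

```python
def expand_alignment(macro, local, model_ind, observed_ind):
    """Expand abstract events in a macro-alignment.

    `macro` is a list of (log_move, model_move) pairs with None for ⊥.
    `local` maps each abstract event to its precomputed local alignment;
    `model_ind` / `observed_ind` map it to the modeled / observed
    indication sequences used for asynchronous moves.
    """
    out = []
    for log_mv, model_mv in macro:
        if log_mv == model_mv and log_mv in local:
            out.extend(local[log_mv])                       # synchronous move
        elif model_mv is None and log_mv in observed_ind:
            out.extend((e, None) for e in observed_ind[log_mv])  # log-only move
        elif log_mv is None and model_mv in model_ind:
            out.extend((None, e) for e in model_ind[model_mv])   # model-only move
        else:
            out.append((log_mv, model_mv))                  # ordinary move
    return out

# Toy macro-alignment with one abstract event "A" in all three roles:
macro = [("t1", "t1"), ("A", None), (None, "A"), ("t8", "t8")]
local = {"A": [("t3", "t3"), (None, "t17")]}
print(expand_alignment(macro, local,
                       model_ind={"A": ["t3", "t17"]},
                       observed_ind={"A": ["t3"]}))
```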


7   Experiments

The technique presented in this paper has been implemented in Python as a pro-
totype tool. The tool has been evaluated over different families of examples,
alongside the state-of-the-art techniques for computing alignments: [9] (ILP.R)
and [1] (A∗ ). We used benchmark datasets from [9] and [5], and a new dataset.
Reduction of Models. Table 2 provides the results of a one-time reduction ob-
tained by applying the proposed method to the benchmark datasets. Significant
reductions are often found. The reduction is more pronounced for models with-
out loops or with only small loops, like Banktransfer.




Quality of Alignments. Since the alignment technique ILP.R may be approxi-
mate, Table 3 provides an overview of how many of the computed alignments




can be replayed for the ILP.R method when combined with the technique of this
paper2 .
Comparing with Original Alignments. Table 4 reports the evaluation of the qual-
ity of the results for both approaches [1], [9], with and without applying the
technique of this paper. The ED/Jaccard columns report the edit/Jaccard dis-
tance between the sequences computed, while the MSE column reports the mean
squared error between the corresponding fitness values. Edit distances are often
large, but interestingly this has no impact on fitness: when expanding abstract
nodes, although the final positions may differ, the model can still replay the
obtained sequences very often.
Memory Usage. The memory usage of computing alignments using [1] is reduced
significantly. For large models (prDm6, prFm6, prGm6, M6 , M10 ), [1] can only
compute alignments when applied in combination with the technique of this
paper.
Computation Time Comparison. Fig. 8(a)-(b) reports computation times for the
BPM2013 and the other benchmark datasets respectively. The A∗ approach
combined with the proposed method is significantly faster than the other ap-
proach on nearly all datasets, except prGm6, prDm6, M6 and M10 . Still, the
A∗ approach cannot compute alignments for models M6 and M10 even after
applying the presented technique; in that case the combination of ILP.R with
the presented technique is the best choice.

8     Conclusion and Future Work
We have presented a technique that can be used to significantly alleviate the
complexity of computing alignments. The technique uses the indication relation
2
    Expanded alignments provided by A∗ were replayed 100% for all datasets.




                                             (a)




                                             (b)

    Fig. 8: Computation time for (a) BPM2013 and (b) synthetic datasets.


to abstract less important parts of a process model so that the global computa-
tion of alignments focuses on a reduced instance. The reduced parts of the
computed alignments are then expanded to represent local deviations as well.
Experiments witness the capability of the technique when used in combination
with state-of-the-art approaches for alignment computation. Future work will
be devoted to applying the technique to more unstructured inputs.

References
 1. Arya Adriansyah. Aligning observed and modeled behavior. PhD thesis, Technische
    Universiteit Eindhoven, 2014.
 2. Arya Adriansyah, Jorge Munoz-Gama, Josep Carmona, Boudewijn F. van Dongen,
    and Wil M. P. van der Aalst. Measuring precision of modeled behavior. Inf. Syst.
    E-Business Management, 13(1):37–67, 2015.
 3. Sandie Balaguer, Thomas Chatain, and Stefan Haar. Building occurrence nets
    from reveals relations. Fundam. Inform., 123(3):245–272, 2013.




 4. Stefan Haar. Unfold and Cover: Qualitative Diagnosability for Petri Nets. In
    Proceedings of the 46th IEEE Conference on Decision and Control (CDC’07), pages
    1886–1891, New Orleans, LA, USA, United States, 2007. IEEE Control System
    Society.
 5. Jorge Munoz-Gama, Josep Carmona, and Wil M. P. Van Der Aalst. Single-entry
    single-exit decomposed conformance checking. Inf. Syst., 46:102–122, December
    2014.
 6. Saul B. Needleman and Christian D. Wunsch. A general method applicable to
    the search for similarities in the amino acid sequence of two proteins. Journal of
    Molecular Biology, 48(3):443–453, 1970.
 7. Artem Polyvyanyy, Jussi Vanhatalo, and Hagen Völzer. Simplified computation
    and generalization of the refined process structure tree. In 7th International Con-
    ference on Web Services and Formal Methods, WS-FM’10, pages 25–41, Berlin,
    Heidelberg, 2011.
 8. Robert E. Tarjan and Jacobo Valdes. Prime subprogram parsing of a program.
    In Proceedings of the 7th ACM SIGPLAN-SIGACT Symposium on Principles of
    Programming Languages, POPL ’80, pages 95–105, New York, NY, USA, 1980.
    ACM.
 9. Farbod Taymouri and Josep Carmona. A recursive paradigm for aligning observed
    behavior of large structured process models. In 14th International Conference of
    Business Process Management (BPM), Rio de Janeiro, Brazil, September 18 - 22,
    2016.
10. Wil M. P. van der Aalst. Process Mining - Discovery, Conformance and Enhance-
    ment of Business Processes. Springer, 2011.
11. Jussi Vanhatalo, Hagen Völzer, and Jana Koehler. The refined process structure
    tree. In Proceedings of the 6th International Conference on Business Process Man-
    agement, BPM ’08, pages 100–115, Berlin, Heidelberg, 2008. Springer-Verlag.
12. H. M. W. Verbeek and W. M. P. van der Aalst. Merging Alignments for Decomposed
    Replay, pages 219–239. Springer International Publishing, Cham, 2016.



