<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Decision Diagrams for Petri Nets: which Variable Ordering?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Elvio Gilberto Amparore</string-name>
          <email>amparore@di.unito.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Susanna Donatelli</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Beccuti</string-name>
          <email>beccuti@di.unito.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giulio Garbi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrew Miner</string-name>
          <email>asminer@iastate.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Iowa State University</institution>
          ,
          <addr-line>Iowa</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Università di Torino</institution>
          ,
          <addr-line>Dipartimento di Informatica</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <fpage>31</fpage>
      <lpage>50</lpage>
      <abstract>
        <p>The efficacy of decision diagram techniques for state space generation is known to be heavily dependent on the variable order. Ordering can be done a priori (static) or during the state space generation (dynamic). We focus our attention on static ordering techniques. Many static decision diagram variable ordering techniques exist, but it is hard to choose which method to use, since only fragmented information about their performance is available. In the work reported in this paper we used the models of the 2016 edition of the Model Checking Contest to provide an extensive comparison of 14 different algorithms, in order to better understand their efficacy. The comparison is based on the size of the decision diagram of the generated state space. The state space is generated using the Saturation method provided by the Meddly library.</p>
      </abstract>
      <kwd-group>
        <kwd>decision diagrams</kwd>
        <kwd>static variable ordering</kwd>
        <kwd>heuristic optimization</kwd>
        <kwd>saturation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Binary Decision diagram (BDD) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] is a well-known data structure that has been
extensively used in industrial hardware verification thanks to its ability to encode
complex boolean functions over very large domains. In the context of discrete event
dynamic systems in general, and of Petri nets in particular, BDDs and their
extensions (e.g. Multi-way Decision Diagrams, MDDs) have been proposed to efficiently
generate and store the state space of complex systems. Indeed, symbolic state
space generation techniques exploit Decision Diagrams (DDs) because they
allow entire sets of states to be encoded and manipulated at once, instead of storing
and exploring each state explicitly.
      </p>
      <p>
        The size of a DD representation is known to be strongly dependent on the
variable order: a good ordering can significantly change the memory consumption
and the execution time needed to generate and encode the state space of a
system. Unfortunately, finding the optimal variable ordering is known to be
NP-complete [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Therefore, efficient DD generation usually relies on
heuristics for the selection of (sub)optimal orderings. In this paper we only
consider static variable ordering, i.e. once the variable ordering l is selected, the
MDD construction starts without the possibility of changing l. Several papers
in the literature study the topic of variable ordering. An
overview of these works can be found in [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], and in the recent work in [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. In
particular the latter work considers a new set of variable ordering algorithms,
based on Bandwidth-reduction methods [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], and observes that they can be
successfully applied for variable ordering. We also consider the work published
in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] (based on the ideas in [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ], which describes state-of-the-art variable ordering
methods specialized for Petri nets.
      </p>
      <p>The motivation of this work was to understand how these different algorithms
for variable orderings behave. Also, we wanted to investigate whether the
availability of structural information on the Petri net model could make a difference.
As far as we know there is no extensive comparison of these specific methods.</p>
      <p>In particular we have addressed the following research objectives:
1. Build an environment (a benchmark) in which different algorithms can be
checked on a vast number of models.
2. Investigate whether structural information like P-semiflows can be exploited
to define better algorithms for variable ordering.
3. Identify the right metrics to compare variable ordering algorithms in the
fairest manner.</p>
      <p>
        To achieve these objectives we have built a benchmark in which 14 different
algorithms for variable ordering have been implemented and compared on the state
space generation of the Petri nets taken from the models of the Model Checking
Contest (both colored and uncolored), 2016 edition [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. The implementation
is part of RGMEDD [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], the model-checker of GreatSPN [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and uses MDD
saturation [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The ordering algorithms are either taken from the literature
(both in their basic form and with a few new variations) or they were already
included in GreatSPN.
      </p>
      <p>Figure 1, left, depicts the benchmark workflow. Given a net system S =
⟨N, m0⟩, all ordering algorithms in A are run (box 1), then the reachability set
RS_l of the system is computed for each ordering l ∈ L (box 2) and the algorithms are
ranked according to some MDD metrics MM(RS_l) (box 3). The best-ranked algorithm
a* is then the best algorithm for solving the PN system S = ⟨N, m0⟩ (box 4)
and its state space RS_l could be the one used to check properties.</p>
      <p>This workflow allows us to: 1) provide indications of the best performing
algorithm for a given model and 2) compare the algorithms in A on a large set of
models to possibly identify the algorithm with the best average performance.
The problem of defining a ranking among algorithms (or of identifying the “best”
algorithm) is non-trivial and will be explored in Section 3.</p>
      <p>Figure 1, right, shows a high-level view of the approach used to compare
variable ordering algorithms in the benchmark. Columns represent algorithms,
and rows represent model instances, that is to say a Petri net model with an
associated initial marking. A square in position (j, k) represents the state space
generation for the j-th model instance done by GreatSPN using the variable
ordering computed by algorithm a_k. A black square indicates that the state
space was generated within the given time and memory limits.</p>
      <p>[Figure 1. Left: the benchmark workflow. (1) Run each variable ordering
algorithm a ∈ A on the system ⟨N, m0⟩, producing a set L of orderings. (2) For
each l ∈ L, reorder V according to l and compute the MDD RS_l. (3) Rank the
algorithms in A according to MDD metrics of {RS_l}. (4) The best algorithm
a* ∈ A for ⟨N, m0⟩ is assigned score points. Right: a matrix of model instances
(rows, grouped by model) versus static variable ordering algorithms a_1 ... a_h
(columns); a black square marks an instance solved by an algorithm, a white
square an instance not solved. Row groups distinguish model instances not solved
by any algorithm, solved by some algorithms, and solved by all algorithms in A.]</p>
    </sec>
    <sec id="sec-3">
      <p>In the analysis of the results from the benchmark we shall distinguish among
model instances for which no variable ordering was good enough to allow
GreatSPN to generate the state space (only white squares on the model instance row,
as for the first two rows in the figure), model instances for which at least one
variable ordering was good enough to generate the state space (at least one black
square in the row), and model instances in which GreatSPN generates the state
space with all variable orderings (all black squares in the row), that we shall
consider “easy” instances.</p>
      <p>In the analysis it is also important to distinguish whether we evaluate
ordering algorithms w.r.t. all possible instances or on a representative set of them.
Figure 1, right, highlights that instances are not independent, since they are
often generated from the same “model”, that is to say the same Petri net N, by
varying the initial marking m0 or some other parameter (like the cardinality of
the color classes). As we shall see in the experimental part, collecting measures
over all instances, in which all instances have the same weight, may lead to a
distortion of the observed behaviour, since the number of instances per model
can differ significantly. A measure “per model” is therefore also considered.</p>
      <p>
        This work would not have been possible without the models made available
by the Model Checking Contest, the functions of the Meddly MDD library and
the GreatSPN framework. We review them in the following.
Model Checking Contest. The Model Checking Contest [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] is a yearly
scientific event whose aim is to provide a comparison among the different available
verification tools. The 2016 edition employed a set of 665 PNML instances
generated from 65 (un)colored models, provided by the scientific community. The
participating tools are compared on several examination goals, i.e. state space,
reachability, LTL and CTL formulas. The MCC team has designed a scoring
system to evaluate tools, which we shall employ as one of the benchmark
metrics for evaluating the algorithms, since evaluating the orderings reduces
to evaluating the same tool, GreatSPN, in as many variations as the number of
ordering algorithms considered.
      </p>
      <p>
        Meddly library. Meddly (Multi-terminal and Edge-valued Decision Diagram
LibrarY) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] is an open-source library implementation of Binary Decision Diagrams
(BDDs) and several variants, including Multi-way Decision Diagrams (MDDs,
implemented “natively”) and Matrix Diagrams (MxDs, implemented as MDDs
with an identity reduction rule). Users can construct one or more forests
(collections of nodes) over the same or different domains (collections of variables).
Several “apply” operations are implemented, including customized and efficient
relational product operations and saturation [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] for generating the set of states
(as an MDD) reachable from an initial set of states according to a transition
relation (as an MxD). Saturation may be invoked either with an already known
(“pre-generated”) transition relation, or with a transition relation that is built
“on the fly” during saturation [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], although this is currently a prototype
implementation. The transition relation may be specified as a single monolithic
relation that is then automatically split [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], or as a relation partitioned by
levels or by events [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], which is usually preferred since the relation for a single
Petri net transition tends to be small and easy to construct.
      </p>
      <p>
        GreatSPN framework. GreatSPN is a well-known collection of tools for the
design and analysis of Petri net models [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. The tools are aimed at the qualitative
and quantitative analysis of Generalized Stochastic Petri Net (GSPN) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and
Stochastic Symmetric Net (SSN) models through the computation of structural
properties, state space generation and analysis, analytical computation of performance
indices, fluid approximation and diffusion approximation, symbolic CTL model
checking, all available through a common graphical user interface [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The state
space generation [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] of GreatSPN is based on Meddly. In this paper we use the
collection of variable ordering heuristics implemented in GreatSPN. This
collection has been enlarged to include all the variable ordering algorithms described
in Section 2.
      </p>
      <p>The paper is organized as follows: Section 2 reviews the considered
algorithms; Section 3 describes the benchmark (models, scores and results); Section 4
considers a parameter estimation problem for optimizing one of the best
methods found (Sloan); and Section 5 concludes the paper, outlining possible future
directions of work.</p>
      <p>2&#160;&#160;The set A of variable ordering algorithms</p>
      <p>In this section we briefly review the algorithms considered by the benchmark.
Although our target model is that of Petri nets, we describe the algorithms in a
more general form (as some of them were not defined specifically for Petri nets).
We therefore consider the following high-level description of the model.</p>
      <p>
        Let V be the set of variables, which translates directly to MDD levels. Let
E be the set of events in the model. Events are connected to input and output
variables. Let V^in(e) and V^out(e) be the sets of input and output variables
of event e, respectively. Let E^in(v) and E^out(v) be the sets of events in which
variable v participates as an input or as an output variable, respectively. For
some models, structural information is known in the form of disjoint variable
partitions named structural units. Let U be the set of structural units. With u(v)
we denote the unit of variable v. Let V(u) be the set of variables in unit u ∈ U. In
this paper we consider three different types of structural unit sets. Let PSF be
the set of units corresponding to the P-semiflows of the net, obtained ignoring the
place multiplicity. On some models, structural units are known because they are
obtained from the composition of smaller parts. We mainly refer to [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] for the
concept of Nested Units (NU). Let NU be this set of structural units, which is
defined only for a subset of models. Finally, structural units can be derived using
a clustering algorithm. Let Cl be such a set of units. We will discuss clustering
in Section 2.5.
      </p>
      <p>Following these criteria, we subdivide the set of algorithms A into: the set
AGen of algorithms that do not use any structural information; the set APSF of
algorithms that require PSF; and the set ANU of algorithms that require NU.
Since clustering can be computed on almost any model, we consider the methods
that use Cl as part of AGen.</p>
      <p>In our context, the set of MDD variables V corresponds to the place set of
the Petri net, and the set of events is the transition set of the Petri net. Let
l : V → N be a variable order, i.e. an assignment of consecutive integer values
to the variables in V.</p>
      <sec id="sec-3-1">
        <title>2.1 Force-based orderings</title>
        <p>
          The FORCE heuristic, introduced in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], is an n-dimensional graph layering
technique based on the idea that the variables form a hypergraph, such that variables
connected by the same event are subject to an “attractive” force, while variables
not directly connected by an event are subject to a “repulsive” force. It is a
simple method that can be summarized as follows:
        </p>
        <p>Algorithm 1 gives the general skeleton of the FORCE algorithm. One of the
most interesting parts of this method is that it can be seen as a factory of variable
orders, generating K different orders. The algorithm starts by shuffling the
variable set, then it iterates K times trying to achieve convergence. The
version of FORCE that we tested uses K = 200. In our experience, FORCE does
not converge stably to a single permutation on most models, and it generates a
new permutation at every iteration.</p>
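        <p>A minimal Python sketch of a FORCE-style iteration may help fix the idea (a hedged illustration: the helper names, the stopping policy and the use of the total span as selection metric are our own assumptions, not the paper's implementation):

```python
import random

def total_span(order, events):
    # event span = distance in the order between the lowest and the
    # highest variable connected to the event
    pos = {v: i for i, v in enumerate(order)}
    return sum(max(pos[v] for v in vs) - min(pos[v] for v in vs)
               for vs in events.values())

def force_order(variables, events, iterations=200, seed=1):
    # events: dict mapping an event name to the list of its variables
    rng = random.Random(seed)
    order = list(variables)
    rng.shuffle(order)                 # start from a random permutation
    candidates = [list(order)]         # FORCE acts as a factory of orders
    for _ in range(iterations):
        pos = {v: i for i, v in enumerate(order)}
        # "attraction": each event's center of gravity is the mean
        # position of the variables it connects ...
        cog = {e: sum(pos[v] for v in vs) / len(vs)
               for e, vs in events.items()}
        # ... and each variable is pulled toward the mean cog of its events
        pull = {}
        for v in variables:
            touching = [cog[e] for e, vs in events.items() if v in vs]
            pull[v] = sum(touching) / len(touching) if touching else pos[v]
        order = sorted(variables, key=lambda v: pull[v])
        candidates.append(list(order))
    # keep the candidate order that minimizes the chosen metric
    return min(candidates, key=lambda o: total_span(o, events))
```
</p>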
        <p>
          After having produced a candidate variable order l, a metric function
f : L → R is used to estimate the quality of that order. The chosen order is then the one
that maximises (or minimises) the target metric function. In our benchmark we
tested the following metric functions:
– Point-Transition Spans: the PTS score is given by the sum of all distances
between the centers of gravity cog_e and their connected variables. The
formula is the following: PTS(l) = 1/(|E|·|V|) · Σ_{e ∈ E} Σ_{v ∈ V(e)} |p(v) − cog_e|.
– Normalized Event Span: the NES score [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] is based on the idea of measuring
the sum of the event spans of each event e. The event span span_l(e) of event
e for the variable order l is defined as the largest distance in l between every
pair of variables v, v′ ∈ V(e). The metric is then defined as a normalized
sum of these spans: NES(l) = 1/|E| · Σ_{e ∈ E} (span_l(e) + 1)/|V|.
– Weighted Event Span: WES [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] is a modified version of NES where events
closer to the top of the variable order weigh more. This modification is
based on the assumption that the Saturation algorithm performs better when
most events are close to the bottom of the order. The formula of WES^i is:
WES^i(l) = 1/|E| · Σ_{e ∈ E} (span_l(e) + 1)/|V| · (2·top(e)/|V|)^i, with i = 1.
        </p>
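        <p>The NES and WES metrics can be sketched directly from their formulas (a hedged illustration; counting positions from 0 at the bottom of the order is our own convention):

```python
def nes(order, events):
    # NES(l) = 1/|E| * sum over events of (span_l(e)+1)/|V|
    pos = {v: i for i, v in enumerate(order)}
    n = len(order)
    return sum((max(pos[v] for v in vs) - min(pos[v] for v in vs) + 1) / n
               for vs in events.values()) / len(events)

def wes(order, events, i=1):
    # WES^i(l) adds the factor (2*top(e)/|V|)^i, so events whose highest
    # variable sits near the top of the order weigh more
    pos = {v: j for j, v in enumerate(order)}
    n = len(order)
    total = 0.0
    for vs in events.values():
        top = max(pos[v] for v in vs)
        span = top - min(pos[v] for v in vs)
        total += (span + 1) / n * (2 * top / n) ** i
    return total / len(events)
```
</p>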
        <p>In addition to the evaluation metrics, structural information of the model
can be used to establish additional centers of gravity. We tested three variations
of the FORCE method, which are:
– FORCE-*: events are used as centers of gravity, as described in Algorithm 1,
where * is one among PTS, NES or WES.
– FORCE-P: P-semiflows are used as centers of gravity instead of the events.</p>
        <p>The method is only tested for those models that have P-semiflows. It uses
the WES metric to select between the K generated orderings.
– FORCE-NU: structural units are used as centers of gravity, along with the
events. This intuitively tries to keep together those variables that belong to
the same structural unit. Again, WES is used as the metric function.
The set A of algorithms considered in the benchmark includes: FORCE-PTS,
FORCE-NES and FORCE-WES in AGen; the method FORCE-P in APSF; and the
method FORCE-NU in ANU, for a total of 5 variations of this method.</p>
      </sec>
      <sec id="sec-3-2">
        <title>2.2 Bandwidth-reduction methods</title>
        <p>
          Bandwidth-reduction (BR) methods are a class of algorithms that try to permute
a sparse matrix into a band matrix, i.e. one where most of the non-zero entries are
confined to a (hopefully) small band near the diagonal. It is known [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] that
reducing the event span in the variable order generally improves the compactness
of the generated MDD. Therefore, event span reduction can be seen as an
equivalent of a bandwidth reduction on a sparse matrix. A first connection between
bandwidth-reduction methods and variable ordering has been tested in [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] and
in [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] on the model bipartite graph. In these works the considered BR
methods were: the Reverse Cuthill-Mckee algorithm [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]; the King algorithm [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]; and the
Sloan algorithm [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. The choice was motivated by their ready availability in the
Boost-C++ and ViennaCL libraries. In particular Sloan, which is the
state-of-the-art method for bandwidth reduction, showed promising performance as a
variable ordering method. In addition, Sloan almost always outperforms [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] the
other two BR methods, but for completeness of our benchmark we have decided
to replicate those results. We concentrate our review on the Sloan method only,
due to its effectiveness and its parametric nature.
        </p>
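        <p>As an illustration of the BR family, a plain Cuthill-Mckee visit fits in a few lines of Python (a simplified sketch: the Boost-C++ implementations used by GreatSPN include further refinements, e.g. the choice of pseudo-peripheral starting vertices):

```python
from collections import deque

def cuthill_mckee(adj):
    # adj: dict {node: set(neighbours)} of the symmetrized interaction
    # graph. BFS from a minimum-degree node, expanding neighbours by
    # increasing degree; reversing the result gives Reverse Cuthill-Mckee.
    order, seen = [], set()
    for start in sorted(adj, key=lambda n: len(adj[n])):
        if start in seen:
            continue
        seen.add(start)
        queue = deque([start])
        while queue:
            n = queue.popleft()
            order.append(n)
            for nb in sorted(adj[n] - seen, key=lambda m: len(adj[m])):
                seen.add(nb)
                queue.append(nb)
    return order

def reverse_cuthill_mckee(adj):
    return list(reversed(cuthill_mckee(adj)))
```
</p>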
        <p>
          The goal of the Sloan algorithm is to condense the entries of a symmetric
square matrix A around its diagonal, thus reducing the matrix bandwidth and
profile [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. It works on symmetric matrices only, hence it is necessary to impose
some form of translation of the model graph into a form that is accepted by
the algorithm. The work in [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] adopts the symmetrization of the dependency
graph of the model, i.e. the input matrix A for the Sloan algorithm will have
(|V| + |E|) × (|V| + |E|) entries. We follow instead a different approach. The size
of A is U × U, with |V| ≤ U ≤ |V| + |E|. Every event e generates entries in A:
when |V^in(e)| · |V^out(e)| &lt; T, with T a threshold value, all the cross-products
of V^in(e) vertices with V^out(e) vertices are nonzero entries in A. If instead
|V^in(e)| · |V^out(e)| ≥ T, a pseudo-vertex v_e is added, and all V^in(e) × {v_e} and
{v_e} × V^out(e) entries in A are set to be nonzero. Usually U will be equal to
|V|, or just slightly larger. This choice better represents the variable–variable
interaction matrix, while avoiding degenerate cases where a highly connected event
could generate a dense matrix A. In our implementation, T is set to 100. The
matrix is finally made symmetric using: A′ = A + Aᵀ. As we shall see in
Section 3, the computational cost of Sloan remains almost always bounded.
        </p>
        <p>Algorithm 2 Pseudocode of the Sloan algorithm.
Function Sloan:
    Select a vertex u of the graph.
    Select v as the most-distant vertex from u with a graph visit.
    Establish a gradient from 0 in v to d in u using a depth-first visit.
    Initialize the visit frontier Q = {v}.
    repeat until Q is empty:
        Remove from the frontier Q the vertex v′ that maximizes P(v′).
        Add v′ to the variable ordering l.
        Add the unexplored adjacent vertices of v′ to Q.</p>
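        <p>The construction of A described above can be sketched as follows (an illustrative sketch: the (V_in, V_out) event encoding and the edge-set return value are our own assumptions):

```python
def interaction_edges(num_vars, events, T=100):
    # events: list of (V_in, V_out) pairs of variable indices; returns the
    # symmetric edge set of the matrix A described above (A' = A + A^T)
    # together with its final size U. Events whose cross product reaches
    # the threshold T get a pseudo-vertex instead of the full product.
    edges = set()
    next_free = num_vars            # pseudo-vertices take indices beyond |V|
    for v_in, v_out in events:
        if len(v_in) * len(v_out) >= T:
            ve = next_free
            next_free += 1
            pairs = [(a, ve) for a in v_in] + [(ve, b) for b in v_out]
        else:
            pairs = [(a, b) for a in v_in for b in v_out]
        for a, b in pairs:
            edges.add((a, b))
            edges.add((b, a))       # symmetrization
    return edges, next_free         # next_free is U, between |V| and |V|+|E|
```
</p>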
        <p>
          A second relevant characteristic of Sloan is its parametric priority function
P(v′), which guides variable selection in the greedy strategy. A very compact
pseudocode of Sloan is given in Algorithm 2. A more detailed one can be found
in [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. The method follows two phases. In the first phase it searches for a
pseudo-diameter of the graph of the A matrix, i.e. two vertices v, u that have an (almost)
maximal distance. Usually, a heuristic approach based on the construction of
the root level structure of the graph is employed. The method then performs a
visit, starting from v, exploring in sequence all vertices in the visit frontier Q
that maximize the priority function:
        </p>
        <p>P(v′) = −W1 · incr(v′) + W2 · dist(v, v′)
where incr(v′) is the number of unexplored vertices adjacent to v′, dist(v, v′)
is the distance between the initial vertex v and v′, and W1 and W2 are two
integer weights. The weights control how much Sloan prioritises the visit of the local
cluster (W1) and how much the selection should follow the gradient (W2). Since
the two weights control a linear combination of factors, in our analysis we shall
consider only the ratio W2/W1. In Section 4 we will concentrate on the best selection
of this ratio. We use the names SLO, CM and KING to refer to the Sloan,
Cuthill-Mckee and King algorithms in the AGen set. GreatSPN uses the Boost-C++
implementations of these methods.</p>
      </sec>
      <sec id="sec-3-3">
        <title>2.3 P-semiflows chaining algorithm</title>
        <p>In this subsection we propose a new heuristic algorithm exploiting the PSF set
of structural units obtained by the P-semiflows computation. A P-semiflow is
a positive, integer, left annuller of the incidence matrix of a Petri net, and it
is known that, in any reachable marking, the sum of tokens in the net places,
weighted by the P-semiflow coefficients, is constant and equal to the weighted
sum in the initial marking (P-invariant). The main idea of the algorithm is to keep
the places shared between two PSF units (i.e. P-semiflows) as close as possible in the
final MDD variable ordering, since their markings cannot vary arbitrarily. The
pseudo-code is reported in Algorithm 3. The algorithm takes as input the PSF
set and returns as output a variable ordering (stored in the ordered list l). Initially,
the unit u_i sharing the highest number of places with another unit is removed
from PSF and saved in curr. All its places are added to l.</p>
        <p>Then the main loop runs until PSF becomes empty. The loop comprises the
following operations. The unit u_j sharing the highest number of places with curr
is selected. All the places of u_j already in l that are not in C (i.e. the set of
currently discovered common places) are removed from l. The common places between
curr and u_j not present in C are appended to l. Then the places present only in
u_j are added to l. After these steps, C is updated with the common places of curr
and u_j, and u_j is removed from PSF. Finally curr becomes u_j, completing the
iteration. This algorithm is named P and belongs to the APSF set.</p>
        <p>Algorithm 3 Pseudocode of the P-semiflows chaining algorithm.
Function P-semiflows(PSF):
    l = ∅ is the ordered list of places.
    C = ∅ is the set of currently discovered common places.
    Select a unit u_i ∈ PSF s.t. |u_i ∩ u_j| is maximal over {u_i, u_j} ⊆ PSF with u_i ≠ u_j
    PSF = PSF \ {u_i}
    curr = u_i
    Append V(curr) to l
    repeat until PSF is empty:
        Select a unit u_j ∈ PSF s.t. |curr ∩ u_j| is maximal
        Remove (l ∩ V(u_j)) \ C from l
        Append V(curr ∩ u_j) \ C to l
        Append V(u_j) \ (C ∪ V(curr)) to l
        Add V(curr ∩ u_j) to C
        curr = u_j
        PSF = PSF \ {u_j}
    return l</p>
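        <p>The chaining idea can be sketched as follows (a simplified illustration: it deliberately omits the repositioning of already-placed common places that Algorithm 3 performs):

```python
def psf_chain(units):
    # units: dict {unit_name: set_of_places}
    remaining = {k: set(v) for k, v in units.items()}
    def max_overlap(a):
        return max((len(remaining[a] & remaining[b])
                    for b in remaining if b != a), default=0)
    # start from the unit sharing the most places with another unit
    curr = max(remaining, key=max_overlap)
    curr_set = remaining.pop(curr)
    order = sorted(curr_set)
    placed = set(order)
    while remaining:
        # chain to the unit sharing the most places with the current one
        nxt = max(remaining, key=lambda b: len(curr_set & remaining[b]))
        nxt_set = remaining.pop(nxt)
        order.extend(sorted(nxt_set - placed))
        placed |= nxt_set
        curr_set = nxt_set
    return order
```
</p>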
      </sec>
      <sec id="sec-3-4">
        <title>2.4 The Noack and Tovchigrechko greedy heuristic algorithms</title>
        <p>
          The Noack [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] and the Tovchigrechko [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ] methods are greedy heuristics that
build up the variable order sequence by picking, at every iteration, the variable
that maximizes a weight function. A detailed description can be found in [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
A pseudo-code is given in Algorithm 4.
        </p>
        <p>Algorithm 4 Pseudocode of the Noack/Tovchigrechko heuristics.
Function NOACK-TOV:
    S = ∅ is the set of already selected places.
    for i from 1 to |V|:
        compute the weight W(v) = f(v, S) for each v ∉ S.
        find the v that maximizes W(v).
        l(i) = v.
        S = S ∪ {v}.
    return the variable order l.</p>
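        <p>The greedy scheme of Algorithm 4 can be sketched as follows (an illustrative skeleton; the weight function f is passed in, standing for f_Noack or f_Tov):

```python
def greedy_order(variables, f):
    # at every step select the not-yet-chosen variable maximizing the
    # weight f(v, S), where S is the set of already selected variables
    order, S = [], set()
    for _ in range(len(variables)):
        v = max((v for v in variables if v not in S), key=lambda v: f(v, S))
        order.append(v)
        S.add(v)
    return order
```
</p>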
        <p>The main difference between the Noack and the Tovchigrechko methods is
the weight function f(v, S), defined as:
f_Noack(v, S) = Σ_{e ∈ E^in(v), k1(e)∧k2(e)} (g1(e) + z1(e)) + Σ_{e ∈ E^out(v), k1(e)∧k2(e)} (g2(e) + c2(e))
f_Tov(v, S) = Σ_{e ∈ E^in(v), k1(e)∧k2(e)} g1(e) + Σ_{e ∈ E^in(v), k2(e)} c1(e) + Σ_{e ∈ E^out(v), k1(e)} g2(e) + Σ_{e ∈ E^out(v), k2(e)} c2(e)
where the sub-terms are defined as:
g1(e) = max(0.1, |S ∩ V^in(e)| / |V^in(e)|);
c1(e) = max(0.1, 2·|S ∩ V^out(e)| / |V^out(e)|);
z1(e) = 2·|S ∩ V^out(e)| / |V^out(e)|;
g2(e) = (1 + |S ∩ V^in(e)|) / |V^in(e)|;
c2(e) = max(0.2, 2·|S ∩ V^out(e)| / |V^out(e)|);
k1(e) = (|V^in(e)| &gt; 0);
k2(e) = (|V^out(e)| &gt; 0)</p>
        <p>Little technical information is available about the criteria that were followed for
the definition of the f_Noack and f_Tov functions. An important characteristic is
that both functions use different criteria for input and output event conditions,
i.e. they do not work on the symmetrized event relation, unlike the Sloan method.</p>
        <p>
          The set A of algorithms considered in the benchmark includes four variations
of these two algorithms: the variants NOACK and TOV are the standard
implementations, as described in [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. In addition, we tested whether these methods could
benefit from an additional weight function that favours structural units. The
two methods NOACK-NU and TOV-NU employ this technique. In this case, the function
f(v, S) is modified to add a weight of 0.75 · |S ∩ V(u(v))| to variable v. In our
experiments we tried various values for this weight, but we generally observed
that these modified methods do not benefit from this additional information.
        </p>
      </sec>
      <sec id="sec-3-5">
        <title>2.5 Markov Clustering heuristic</title>
        <p>
          The heuristic MCL is based on the idea of exploring the effectiveness of clustering
algorithms to improve variable ordering. The hypothesis is that in some
models it could be beneficial to first group together places that belong to connected
clusters. For our tests we selected the Markov Cluster algorithm [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ]. The method
works as a modified version of Sloan, where clusters are first ordered according
to their average gradient, and then places belonging to the same cluster
appear consecutively in the variable ordering, following the cluster order. This
method is named MCL and belongs to the AGen set.
        </p>
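        <p>The cluster-based ordering just described can be sketched as follows (an illustrative sketch; the computation of the clusters and of the Sloan-style gradient is assumed to be given):

```python
def order_by_clusters(clusters, gradient):
    # clusters: dict {cluster_id: list_of_places}; gradient: Sloan-style
    # gradient value per place. Clusters are sorted by average gradient
    # and places of the same cluster are kept consecutive.
    ranked = sorted(clusters.values(),
                    key=lambda ps: sum(gradient[p] for p in ps) / len(ps))
    order = []
    for ps in ranked:
        order.extend(sorted(ps, key=lambda p: gradient[p]))
    return order
```
</p>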
        <sec id="sec-3-5-1">
          <title>3 The Benchmark</title>
          <p>
            The considered model instances are those of the Model Checking Contest, 2016
edition [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ], which consists of 665 PNML models. We discarded 257 instances
that our tool was not capable of solving within the imposed time and memory limits,
because either the net was too big or the RS MDD was too large under any
considered ordering. Thus, we considered for the benchmark the set I made of
386 instances, belonging to a set M of 62 models. These 386 instances were run
with each of the 14 tested algorithms, for 75 minutes, with 16GB of memory and a
decision diagram cache of 2^26 entries. In the 386 instances of I two sub-groups are
identified: the set I_PSF ⊆ I of instances for which P-semiflows are available, with
328 instances generated from a subset M_PSF of 55 models; and the set I_NU ⊆ I of
instances for which nested units are available, with 65 instances generated from
a subset M_NU of 15 models.
          </p>
          <p>
            The overall tests were performed on OCCAM [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ] (Open Computing Cluster
for Advanced data Manipulation), a multi-purpose flexible HPC cluster designed
and maintained by a collaboration between the University of Torino and the
Torino branch of the National Institute for Nuclear Physics. OCCAM counts
slightly more than 1100 cpu cores including three different types of computing
nodes: standard Xeon E5 dual-socket nodes, large Xeon E5 quad-sockets nodes
with 768 GB RAM, and multi-GPU NVIDIA nodes. Our experiments took 221
hours of computation using 3 standard Xeon E5 dual-socket nodes.
          </p>
          <p>Scores. Typically, the most important parameter that measures the performance
of variable ordering is the MDD peak size, measured in nodes. The peak size
represents the maximum MDD size reached during the computation, and it therefore
represents the memory requirement. It is also directly related to the time cost
of building the MDD. For these reasons we preferred to use the peak size alone
rather than a weighted measure of time, memory and peak size, that would make
the result interpretation more complex. The peak size is, however, a quantity
that is strictly related to the model instance. Different instances will have
unrelated peak sizes, often with different magnitudes. To treat instances in a balanced
way, some form of normalized score is needed. We consider three different score
functions: for all of them the score of an algorithm is first normalized against
the other algorithms on the same instance, which gives a score per instance,
and then summed over all instances. Let i be an instance, solved by algorithms
A = {a1, …, am} with peak nodes Pi = {pa1(i), …, pam(i)}. The scores of an
algorithm a for an instance i are:
– The Mean Standard Score of instance i is defined as: MSSa(i) = (pa(i) - μA(i)) / σA(i),
where μA(i) and σA(i) are the mean and standard deviation of the peak nodes
over all algorithms that solve instance i.
– The Normalized Score for instance i is defined as: NSa(i) = 1 - min{p ∈ Pi} / pa(i),</p>
          <p>which just rescales the peak nodes taking the minimum as the scaling factor.
– The Model Checking Contest score1 for instance i is defined as: MCCa(i) = 48</p>
          <p>if a terminates on i in the given time bound, plus 24 if pa(i) = min{p ∈ Pi}.</p>
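          <p>For concreteness, the three per-instance scores can be sketched as follows (a plain illustration of the definitions above, not the benchmark scripts):</p>
          <preformat>
```python
# Sketch: per-instance scores for one instance.
# peaks maps each algorithm that solved the instance to its MDD peak nodes;
# time_bound_ok is the set of algorithms that finished within the time bound.
from statistics import mean, pstdev

def instance_scores(peaks, time_bound_ok):
    best = min(peaks.values())
    mu, sigma = mean(peaks.values()), pstdev(peaks.values())
    scores = {}
    for a, p in peaks.items():
        mss = 0.0 if sigma == 0 else (p - mu) / sigma
        ns = 1.0 - best / p            # 0 for the best algorithm
        mcc = (48 if a in time_bound_ok else 0) + (24 if p == best else 0)
        scores[a] = (mss, ns, mcc)
    return scores
```
          </preformat>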
          <p>The final scores used for ranking algorithms over a set I′ ⊆ I are then determined
as the (normalized) sum over I′ of the scores per instance:
– The Mean Standard Score of algorithm a: MSSa = (1/|I′|) Σi∈I′ MSSa(i)
– The Normalized Score of algorithm a: NSa = (1/|I′|) Σi∈I′ NSa(i)
– The Model Checking Contest score of a: MCCa = (1/|I′|) Σi∈I′ MCCa(i)</p>
          <p>MSS requires a certain amount of samples to be significant. Therefore, we
apply it only for those model instances where all our tested algorithms terminated
in the time bound. For those instances where not all algorithms terminated, we
apply NS. We decided to test the MCC score to check whether it is a good or a biased
metric, when compared to the standard score.
1 We actually use a simplified version where answer correctness is not considered.</p>
          <p>Table 1 gives a general summary of the benchmark results. For each
algorithm, we report again its requirement class (AGen, APSF, ANU). The table
reports the number of instances where: the algorithm is applied, the algorithm
finishes in the time and memory bounds, the number of times the algorithm finds
the best ordering among the others, and the number of times the algorithm is
the only one capable of solving an instance. The last four columns report: the
NS score on all instances NSa; the MSS score on completed instances (MSSa);
the NS score on completed instances (NSa); and the MCCa score per instance.</p>
          <p>Remember that for NS and MSS, a lower score is better, while for MCC a higher
score is better.</p>
          <p>From the table, it emerges that TOV and SLO are the methods that perform
best, by a clear margin. Other methods, like FORCE-P and FORCE-NU, have
the worst average behaviour, even if they perform well when they are capable
of solving an instance. Considering the column of “optimal” instances, some
methods seem to perform well, like CM, but as we shall see later in this section, this
is a bias caused by the uneven number of instances per model (i.e. some models
have more instances than others). To our surprise, both the MCL and P methods
show only a mediocre average performance even if there is a certain number of
instances where they perform particularly well.</p>
          <p>In general, most instances are solved by more than one algorithm. In the
rest of the section we go into model details, first considering the performance of
the algorithms, and then ranking the methods considering either instances (I,
IPSF, INU) or models (M, MPSF, MNU).</p>
          <p>Efficiency of variable ordering algorithms. Table 2 reports the times required
to generate the variable ordering. The table reveals several problems and
inefficiencies of some of these methods in our implementation. In particular, our
implementations of the Noack and Tovchigrechko methods are simple prototypes,
and have a cost of at least O(|V|²). The P algorithm also shows a severe
performance problem, with a cost that does not match its theoretical complexity.</p>
          <p>Surprisingly, the cost of Sloan is quite high. The library documentation states
that its cost is O(log(m) · |E|) with m = max{degree(v) | v ∈ V}, but the
numerical analysis seems to suggest that the library implementation could have some
hidden inefficiency.
Figure 2 shows the benchmark results, separated by instance class and algorithm
class. The plots on the left (1, 3, and 5) report the results on the instances
that are solved by all algorithms in the given time and memory limits (“Easy
instances” hereafter), while those on the right report the results for all instances
solved by at least one algorithm (“All instances” hereafter). In the left plots we
report the three tested metrics, while on the right plots we discard the MSS,
since the available samples for each instance may vary and could be too low for
the Gaussian assumption. To fit all scores in a single plot we have rescaled the
score values in the [0, 1] range.</p>
          <p>Algorithms are sorted by their NS score, best one on the left. The top row
(plots 1 and 2) considers the AGen methods on all I instances. The center row
considers the AGen ∪ APSF methods on the IPSF instances. The bottom row
considers the AGen ∪ ANU methods on the INU instances. Plots 1, 3, and 5
consider 187, 145 and 11 instances, respectively, which are the “easy” instances.</p>
          <p>All three scores seem to provide (almost) consistent rankings. The only
exception is plot 5. We suspect that this discrepancy can be explained by the small
number of available instances (i.e. 11). The MCC metric is not very significant
when we consider only the subset of instances solved by all algorithms, since
it does not reward those methods that complete more often than the others. It
is instead more significant for the right plots, which consider also the instances
where algorithms do not finish.</p>
          <p>In general, we can observe that FORCE methods seem to provide better
variable orderings when the easy instances are considered (left plots). When we
instead consider all the instances, FORCE methods appear to be less efficient.</p>
          <p>This suggests that FORCE methods may not scale well on larger instances.</p>
          <p>We observe a marginally favorable behaviour of the two methods FORCE-P
and FORCE-NU for the models with P-semiflows and structural units, compared
to the standard FORCE method. The general trend that seems to emerge from
these plots is that TOV, NOACK and SLO have the best average performance. This
is particularly evident in plot 2, which refers to the behaviour on “all” instances.</p>
          <p>One problem of this benchmark setup is that the MCC model set is made of
multiple parametric instances of the same model, and the number of instances
per model varies widely (some models have just one, others up to 40 instances).</p>
          <p>Usually, the relative performances of each algorithm on different instances of the
same model are similar. Thus, an instance-oriented benchmark is biased toward
those models that have more instances. Therefore, we consider a modified version
of the three metrics MSS, NS and MCC, where the value is normalized against
the number of instances Im of each considered model m ∈ M′. Therefore, MSSa
is redefined as: MSSa = (1/|M′|) Σm∈M′ (1/|Im|) Σi∈Im MSSa(i), and NS and MCC are
modified analogously.</p>
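          <p>The per-model normalization can be sketched as follows (the data layout is assumed for illustration):</p>
          <preformat>
```python
# Sketch: weight per-instance scores by model so that models with many
# instances do not dominate the ranking.
def model_weighted_score(score_by_instance, model_of):
    # score_by_instance: dict instance -> score of one algorithm
    # model_of: dict instance -> model name
    by_model = {}
    for i, s in score_by_instance.items():
        by_model.setdefault(model_of[i], []).append(s)
    per_model = [sum(v) / len(v) for v in by_model.values()]
    return sum(per_model) / len(per_model)
```
          </preformat>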
          <p>Figure 3 shows the benchmark results weighted by models. Plots 1, 3, and 5
consider 48, 42 and 5 models, respectively, which are those models that have at
least an “easy” instance. The main difference observed is the performance of the
Cuthill-McKee method, which on model-average does not seem to perform well.
This is explained by the fact that CM performs well on three models
(BridgeAndVehicle, Diffusion2D and SmallOperatingSystem) that have a large number of
instances. But when we look from a model point-of-view, the performance of CM
drops down. As we have already stated before, both the MCL and P methods are
consistently the worst methods. In addition, neither TOV-NU nor NOACK-NU seems
to take advantage of the prioritization on structural units, when compared to
their base counterparts.</p>
          <p>We may also observe that while TOV has the highest NS and MSS scores, SLO
has the highest MCC score. To investigate this behaviour we look at the score
distributions of the algorithms in Figure 4.</p>
          <p>Fig. 4. Score distributions for the by Instance case: NS score distribution
for the easy instances (left) and for all instances (right).</p>
          <p>PNSE’17 – Petri Nets and Software Engineering</p>
          <p>Plots 1 and 2 of Figure 4 show the NS score distributions on the “easier” and
“all” instances. The y coordinate of each dot represents the score obtained by
an algorithm for an instance. Each box width is chosen large enough to observe
the point density. In both plots TOV and NOACK have a behaviour that is more
polarized than SLO, which means that usually either they produce an
almost-optimal variable ordering, or they have a higher chance of failing at producing
one. On the other hand, SLO seems to have a more balanced behaviour on most
instances, by score, even if it provides the best variable ordering fewer times
than TOV, as highlighted by the MCC score. This happens because the MCC score
is designed to give almost identical points to all algorithms that complete
(except for the best algorithm, which takes extra points), regardless of the
“quality” of the variable ordering. Therefore, the method that on-average
performs better will have a higher MCC score.</p>
          <p>Parameter estimation of the Sloan priority function. Since Sloan seems to
provide very good variable orders on average, we have investigated whether the
parametric priority function P(v′) can be tuned, since the implementation of
Sloan available in the library has been developed for a different purpose.
Figure 5 shows the MSS, NS and Time lines of the Sloan algorithm applied
to 271 instances. The results have been obtained considering the subset of
instances that terminate in less than 10 minutes. These instances belong to 56
different models. The lines are rescaled, and centered around the W1 = 1, W2 = 2
point, which are the standard coefficients of the priority function.</p>
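          <p>The exploration of the weight ratio can be sketched as a simple parameter sweep; run_sloan and the ratio grid below are assumptions for illustration, not the actual experimental harness:</p>
          <preformat>
```python
# Hypothetical sweep over the W1/W2 ratio: fix W1 = 1 and vary W2 = 1/ratio,
# recording the MDD peak produced by each weighting on each instance.
RATIOS = [1/16, 1/8, 1/4, 1/2, 1, 2, 4, 8]

def sweep(run_sloan, instances):
    # run_sloan(instance, w1, w2) returns the MDD peak for that weighting;
    # the result maps each instance to its best-performing ratio.
    best = {}
    for i in instances:
        peaks = {r: run_sloan(i, 1.0, 1.0 / r) for r in RATIOS}
        best[i] = min(peaks, key=peaks.get)
    return best
```
          </preformat>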
          <p>(A) Scores per instances. (B) Scores per models.</p>
          <p>We experimented with the parameter space of the W1/W2 ratio in the range
going from 1/8 to 32. The figure shows that the ratio of 1/2, which is the
standard value for the algorithm implemented in the library, is not necessarily
the best choice on average. As before, the per-instance plots show slightly
different results than the per-model ones.</p>
          <p>We now consider the 1/8, 1/2 and 1/16 versions for a more detailed analysis on the
full set of 386 instances with a time limit of 75 minutes, depicted in Figure 6.</p>
          <p>Each point represents an instance, where its x and y coordinates are the MDD
peak values computed using the compared parameter variations. Coordinates
are represented on a log-log scale. Instances that did not finish are considered
as having an infinite peak node value, and are represented on the top (resp.
right) sides of the plot. Most instances stay close to the diagonal, which means
that there is almost no difference between the two parametric variations on
that instance. Some instances instead show different MDD peaks, and are mainly
concentrated toward the variations with W2 &gt; W1. We shall now refer to the
parametric variation of Sloan with W1 = 1, W2 = 16 as SLO:1/16.</p>
          <p>Fig. 7. Comparison of Sloan against Tovchigrechko and FORCE-WES1 methods.</p>
          <p>Figure 7 shows the comparison of SLO:1/16 against TOV and FORCE-WES1.</p>
          <p>Unlike the previous case, the plots show that many instances are far from the
diagonal, which means that the three algorithms perform well on different
instances.</p>
          <p>Finally, Figure 8 shows the benchmark results after the substitution of SLO
with SLO:1/16. The plots report both the NS and MCC metrics by instances
(left) and by models (right) on “All” instances. SLO:1/16 performs better than
SLO in 71 instances (13 models) with an MDD peak reduction of 37% (24% for
models), while SLO beats SLO:1/16 in 66 instances (18 models) with a peak
reduction of 56% (20% for models). For 200 instances (28 models) the variable
ordering remains unchanged. Of the 386 tested instances, SLO solves 340 of them,
and in 71 cases it provides the best ordering among the other algorithms.
Instead, SLO:1/16 solves 341 instances, being optimal in 81 cases. These data show
the favorable performance of SLO:1/16. This also confirms the observations of
Figure 6, i.e. the set of models where Sloan works well remains almost unaltered,
but SLO:1/16 has a higher chance of finding a better ordering.
        </sec>
        <sec id="sec-3-5-2">
          <title>Conclusions and Future works</title>
          <p>
            In this paper we presented a comparative benchmark of 14 variable ordering
algorithms. Some of these algorithms are popular among Petri net based model
checkers, while others have been defined to investigate the use of structural
information for variable orderings. We observed that the Tovchigrechko/Noack
methods and the Sloan method have the best performances, and their
effectiveness cover different subsets of model instances. While the methods of
Tovchigrechko/Noack were designed for Petri net models, the method of Sloan is a
standard algorithm for bandwidth reduction of matrices, whose effectiveness for
variable ordering was pointed out only recently in [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ] and [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ]. We observed that
Sloan for variable ordering generation can be improved by changing the
priority function weights. The parameter estimation on Sloan seems to suggest that
the gradient (controlled by W2) has a higher impact than keeping a low matrix
profile (controlled by W1). We conjecture that other algorithms, like TOV, FORCE
or P, could be improved by using a super-imposed gradient. We did not observe
significant improvements for those algorithms that use structural information
of the net, and in some cases (TOV and NOACK) we even observed a degradation
in performance. It is therefore unclear whether and how this type of structural
information can be used to improve variable orderings. We also tested three
different metric scores. We observed an agreement between the MSS and the NS
score, which is convenient since NS can be used even when few algorithms complete.
We also observed that MCC is a good score that favours a different aspect
than MSS/NS (average behaviour over better ordering). The use of a per-model
weight on the scores has helped in identifying a bias in the benchmark results.
We think that some form of per-model weight is necessary when using the MCC
model set.
          </p>
          <p>
            It should be noted that we observed a different ranking than the one
reported in [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ]. That paper deals with metrics for variable ordering (without RS
construction), but in the last section the authors report a small experimental
assessment similar to our benchmark. In that assessment Tovchigrechko was not
tested, and the best algorithms were mostly Sloan and FORCE. For Sloan, they
used the default parameter setting with a rather different symmetrization of the
adjacency matrix, while it is not clear what variation of FORCE was used. This,
together with the fact that the tested model set was different (106 instances),
makes it difficult to draw any definite conclusions.
          </p>
        </sec>
        <sec id="sec-3-5-3">
          <title>Acknowledgement.</title>
          <p>We would like to thank the MCC team and all colleagues that collaborated
with them for the construction of the MCC database of models, and the Meddly
library developers.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Ajmone</given-names>
            <surname>Marsan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Conte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            ,
            <surname>Balbo</surname>
          </string-name>
          , G.:
          <article-title>A class of generalized stochastic Petri nets for the performance evaluation of multiprocessor systems</article-title>
          .
          <source>ACM Transactions on Computer Systems</source>
          <volume>2</volume>
          ,
          <fpage>93</fpage>
          -
          <lpage>122</lpage>
          (May
          <year>1984</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Aldinucci</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bagnasco</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lusso</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasteris</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vallero</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rabellino</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>The Open Computing Cluster for Advanced data Manipulation (OCCAM)</article-title>
          .
          <source>In: 22nd Int. Conf. on Computing in High Energy and Nuclear Physics</source>
          . San Francisco (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Aloul</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Markov</surname>
            ,
            <given-names>I.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sakallah</surname>
            ,
            <given-names>K.A.</given-names>
          </string-name>
          :
          <article-title>FORCE: A fast and easy-to-implement variable-ordering heuristic</article-title>
          .
          <source>In: Proc. of GLSVLSI</source>
          . pp.
          <fpage>116</fpage>
          -
          <lpage>119</lpage>
          . ACM,
          <string-name>
            <surname>NY</surname>
          </string-name>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Amparore</surname>
            ,
            <given-names>E.G.</given-names>
          </string-name>
          :
          <article-title>Reengineering the editor of the greatspn framework</article-title>
          .
          <source>In: PNSE@ Petri Nets</source>
          . pp.
          <fpage>153</fpage>
          -
          <lpage>170</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Amparore</surname>
            ,
            <given-names>E.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Balbo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beccuti</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Donatelli</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Franceschinis</surname>
          </string-name>
          , G.:
          <article-title>30 Years of GreatSPN, chap</article-title>
          . In:
          <article-title>Principles of Performance and Reliability Modeling and Evaluation: Essays in Honor of Kishor Trivedi</article-title>
          , pp.
          <fpage>227</fpage>
          -
          <lpage>254</lpage>
          . Springer, Cham (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Amparore</surname>
            ,
            <given-names>E.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beccuti</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Donatelli</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>(stochastic) model checking in GreatSPN</article-title>
          . In: Ciardo,
          <string-name>
            <given-names>G.</given-names>
            ,
            <surname>Kindler</surname>
          </string-name>
          , E. (eds.) 35th
          <source>Int. Conf. Application and Theory of Petri Nets and Concurrency</source>
          , Tunis. pp.
          <fpage>354</fpage>
          -
          <lpage>363</lpage>
          . Springer, Cham (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Baarir</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beccuti</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cerotti</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pierro</surname>
            ,
            <given-names>M.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Donatelli</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Franceschinis</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>The GreatSPN tool: recent enhancements</article-title>
          .
          <source>Performance Eval</source>
          .
          <volume>36</volume>
          (
          <issue>4</issue>
          ),
          <fpage>4</fpage>
          -
          <lpage>9</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Babar</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miner</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Meddly: Multi-terminal and edge-valued decision diagram library</article-title>
          .
          <source>In: Int. Conf. on Quantitative Evaluation of Systems</source>
          . pp.
          <fpage>195</fpage>
          -
          <lpage>196</lpage>
          . IEEE Computer Society, Los Alamitos, CA, USA (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Bollig</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wegener</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Improving the variable ordering of OBDDs is NP-complete</article-title>
          .
          <source>IEEE Trans. Comput</source>
          .
          <volume>45</volume>
          (
          <issue>9</issue>
          ),
          <fpage>993</fpage>
          -
          <lpage>1002</lpage>
          (
          <year>Sep 1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Bryant</surname>
            ,
            <given-names>R.E.</given-names>
          </string-name>
          :
          <article-title>Graph-based algorithms for boolean function manipulation</article-title>
          .
          <source>IEEE Transactions on Computers</source>
          <volume>35</volume>
          ,
          <fpage>677</fpage>
          -
          <lpage>691</lpage>
          (
          <year>August 1986</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Burch</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clarke</surname>
            ,
            <given-names>E.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Long</surname>
            ,
            <given-names>D.E.</given-names>
          </string-name>
          :
          <article-title>Symbolic model checking with partitioned transition relations</article-title>
          .
          <source>In: IFIP TC10/WG 10.5 Very Large Scale Integration</source>
          . pp.
          <fpage>49</fpage>
          -
          <lpage>58</lpage>
          . North-Holland (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Ciardo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lüttgen</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Siminiceanu</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Saturation: an efficient iteration strategy for symbolic state-space generation</article-title>
          .
          <source>In: TACAS'01</source>
          . pp.
          <fpage>328</fpage>
          -
          <lpage>342</lpage>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Ciardo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marmorstein</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Siminiceanu</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Saturation unbound</article-title>
          .
          <source>In: Proc. of TACAS 2003</source>
          . pp.
          <fpage>379</fpage>
          -
          <lpage>393</lpage>
          . LNCS 2619, Springer (Apr
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Ciardo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>A.J.</given-names>
          </string-name>
          :
          <article-title>Saturation-based symbolic reachability analysis using conjunctive and disjunctive partitioning</article-title>
          .
          <source>In: Proc. CHARME</source>
          . pp.
          <fpage>146</fpage>
          -
          <lpage>161</lpage>
          . LNCS 3725, Springer (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Cuthill</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McKee</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Reducing the bandwidth of sparse symmetric matrices</article-title>
          .
          <source>In: Proc. of the 1969 24th National Conference</source>
          . pp.
          <fpage>157</fpage>
          -
          <lpage>172</lpage>
          . ACM, New York (
          <year>1969</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Kordon</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , et al.:
          <article-title>Complete Results for the 2016 Edition of the Model Checking Contest</article-title>
          . http://mcc.lip6.fr/2016/results.php (
          <year>June 2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Garavel</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Nested-Unit Petri Nets: A structural means to increase efficiency and scalability of verification on elementary nets</article-title>
          .
          <source>In: 36th Int. Conf. Application and Theory of Petri Nets</source>
          , Brussels. pp.
          <fpage>179</fpage>
          -
          <lpage>199</lpage>
          . Springer, Cham (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Heiner</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rohr</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schwarick</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tovchigrechko</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          :
          <article-title>MARCIE's secrets of efficient model checking</article-title>
          .
          <source>In: Transactions on Petri Nets and Other Models of Concurrency XI</source>
          . pp.
          <fpage>286</fpage>
          -
          <lpage>296</lpage>
          . Springer, Heidelberg (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Kamp</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Bandwidth, profile and wavefront reduction for static variable ordering in symbolic model checking</article-title>
          .
          <source>Tech. rep.</source>
          , University of Twente (June
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>King</surname>
            ,
            <given-names>I.P.</given-names>
          </string-name>
          :
          <article-title>An automatic reordering scheme for simultaneous equations derived from network systems</article-title>
          .
          <source>International Journal for Numerical Methods in Engineering</source>
          <volume>2</volume>
          (
          <issue>4</issue>
          ),
          <fpage>523</fpage>
          -
          <lpage>533</lpage>
          (
          <year>1970</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Kumfert</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pothen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Two improved algorithms for envelope and wavefront reduction</article-title>
          .
          <source>BIT Numerical Mathematics</source>
          <volume>37</volume>
          (
          <issue>3</issue>
          ),
          <fpage>559</fpage>
          -
          <lpage>590</lpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Meijer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van de Pol</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Bandwidth and wavefront reduction for static variable ordering in symbolic reachability analysis</article-title>
          .
          <source>In: 8th NASA Formal Methods Symposium</source>
          , Minneapolis. pp.
          <fpage>255</fpage>
          -
          <lpage>271</lpage>
          . Springer, Cham (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Miner</surname>
            ,
            <given-names>A.S.</given-names>
          </string-name>
          :
          <article-title>Implicit GSPN reachability set generation using decision diagrams</article-title>
          .
          <source>Performance Evaluation</source>
          <volume>56</volume>
          (
          <issue>1-4</issue>
          ),
          <fpage>145</fpage>
          -
          <lpage>165</lpage>
          (Mar
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Noack</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A ZBDD package for efficient model checking of Petri nets (in German)</article-title>
          .
          <source>Ph.D. thesis</source>
          , BTU Cottbus, Department of CS (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Rice</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kulhari</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>A survey of static variable ordering heuristics for efficient BDD/MDD construction</article-title>
          .
          <source>Tech. rep.</source>
          , University of California (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Siminiceanu</surname>
            ,
            <given-names>R.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ciardo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>New metrics for static variable ordering in decision diagrams</article-title>
          .
          <source>In: 12th Int. Conf. TACAS 2006</source>
          . pp.
          <fpage>90</fpage>
          -
          <lpage>104</lpage>
          . Springer, Heidelberg (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Sloan</surname>
            ,
            <given-names>S.W.</given-names>
          </string-name>
          :
          <article-title>An algorithm for profile and wavefront reduction of sparse matrices</article-title>
          .
          <source>International Journal for Numerical Methods in Engineering</source>
          <volume>23</volume>
          (
          <issue>2</issue>
          ),
          <fpage>239</fpage>
          -
          <lpage>251</lpage>
          (
          <year>1986</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Tovchigrechko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Model checking using interval decision diagrams</article-title>
          .
          <source>Ph.D. thesis</source>
          , BTU Cottbus, Department of CS (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Van Dongen</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>A cluster algorithm for graphs</article-title>
          .
          <source>Tech. rep. INS-R0010, CWI</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>40</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>