    Symbolic model checking of stochastic reward nets

                                Martin Schwarick

                         Computer Science Department
                      Brandenburg University of Technology
                              Cottbus, Germany



      Abstract. This paper describes a symbolic model checking approach for
      the Continuous Stochastic Reward Logic (CSRL) and stochastic reward
      nets, stochastic Petri nets augmented with rate rewards. CSRL model
      checking requires the computation of the joint distribution of time and
      accumulated reward, which is done by Markovian approximation. An
      implementation is available in the model checker MARCIE. It applies a
      symbolic on-the-fly computation of the underlying matrix using Interval
      Decision Diagrams. The result is a multi-threaded model checker which
      is able to evaluate CSRL formulas for models with millions of states
      depending on the required accuracy.


1    Introduction
A Continuous-time Markov chain (CTMC) augmented by a reward structure
defines a Markov reward model (MRM). The reward structure associates rate
rewards (also interpretable as costs) with the states of the CTMC. In [BHHK00]
the Continuous Stochastic Reward Logic (CSRL) has been introduced as an
expressive and flexible language to specify properties of MRMs. CSRL model
checking is based on the computation of the joint distribution of time and accu-
mulated reward, which can be realized in different ways [Clo06]. However, with
increasing model size and accuracy most of these techniques fail because of an
infeasible runtime. So far there are no efficient tool implementations available.
In this paper we present a symbolic model checking approach for CSRL and its
implementation in MARCIE [SRH11], a symbolic and multi-threaded on-the-fly
model checker for stochastic Petri nets (SPN). We consider MRMs induced by
stochastic reward nets, which are SPNs augmented by a reward structure.


2    Markov Reward Models
Coupling a CTMC C = [S, R, L, s0], where S is a finite set of states, R : S × S → R+
the transition rate matrix, L : S → 2^AP a labelling function assigning to each
state a set of atomic propositions from AP, and s0 the initial state, with a reward
structure ̺ : S → R+ yields an MRM M = [C, ̺]. The reward structure ̺ assigns
to each state a possibly state-dependent rate reward, which is weighted
with the sojourn time the CTMC resides in this state and accumulated over
time as the Markov process evolves. Note that one could also associate impulse
rewards with the transitions.
    In general a CTMC is specified using some kind of high level formalism, in
our case by means of an SPN N = [P, T, F, I, R, V, s0 ] , where P is a finite set of
places, T a finite set of transitions, F : P × T ∪ T × P → N the set of arc weights,
I : P × T → N the set of inhibitor arc weights, and R : P × T → N the set of
read arc weights. A zero weight means that the arc does not exist. The latter two
relations extend the common firing semantics of transitions and allow the definition
of additional enabling conditions; an inhibitor arc defines an upper bound, a read
arc a lower bound for the number of tokens. The mapping s : P → N associates
with each place a number of tokens and represents a state of the SPN. s0 is
the initial state. The mapping V : T → H, where H is a set of generally state-
dependent functions, associates with each transition a rate function ht from H,
defining a negative exponentially distributed delay. If a transition t is enabled
in state s, its firing leads to the state s′, which is denoted as s −t→ s′.
    Assuming that a state transition is given by a unique Petri net transition, the
entry R(s, s′) is defined as

        R(s, s′) = ht(s)   if ∃ t ∈ T : s −t→ s′,
        R(s, s′) = 0       otherwise.

The exit rate E(s) = Σ_{s′∈S} R(s, s′) is the sum of the matrix row indexed with
s. A state s with E(s) = 0 is called an absorbing state, since there is no way to
leave it when reached. The probability of the state transition s −t→ s′ is P(s, s′) =
ht(s)/E(s). Its probability within τ time units is P(s, s′) · (1 − e^{−E(s)·τ}). The
matrix P describes the so-called embedded discrete-time Markov chain. A path
σ = (s0, τ0), . . . , (si, τi), . . . is any finite or infinite sequence with R(si, si+1) > 0,
where τi is the sojourn time in state si. σ[i] gives state si, σ(τ) the index of the
occupied state at time τ, and |σ| denotes the length of σ. The set Paths_s contains
all paths starting in s.
    The transient probability π^C(α, s, τ) is the probability to be in state s at
time τ starting from a certain probability distribution α, with α : S → [0, 1] and
Σ_{s∈S} α(s) = 1. If we consider a single initial state s0 we have α(s0) = 1 and
write π^C_{s0}(s, τ) for short.
    The computation of transient probabilities (transient analysis) of CTMCs
can be realized by uniformization [Ste94]. The basic operation is vector-matrix
multiplication which must be done for a certain number of iterations.
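    To make the basic operation concrete, here is a minimal sketch of uniformization-based
transient analysis (illustrative only, not MARCIE's implementation; function and variable
names are our own, and dense NumPy matrices are assumed): the transient distribution is a
Poisson-weighted sum of powers of the uniformized DTMC, computed by repeated vector-matrix
multiplication.

    # Sketch of uniformization; R is a dense rate matrix with zero diagonal,
    # alpha an initial distribution over the states.
    import numpy as np
    from scipy.stats import poisson

    def transient_probs(R, alpha, tau, eps=1e-9):
        E = R.sum(axis=1)                          # exit rates E(s)
        lam = max(E.max(), 1e-12)                  # uniformization rate >= max exit rate
        P_unif = R / lam + np.diag(1.0 - E / lam)  # uniformized DTMC
        n = int(poisson.ppf(1.0 - eps, lam * tau)) # truncation point of the sum
        pi = np.zeros(len(alpha))
        v = np.asarray(alpha, dtype=float)
        for k in range(n + 1):                     # Poisson-weighted powers of P_unif
            pi += poisson.pmf(k, lam * tau) * v
            v = v @ P_unif                         # one vector-matrix multiplication
        return pi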
    The tuple N M = [N, ̺] where N is an SPN and ̺ a reward structure is
a simple type of stochastic reward net. In contrast to the definition given in
[CBC+ 93] we do not allow impulse rewards, state-dependent arc weights and
immediate transitions.
    A stochastic process X is often formalized as a collection of random variables
{Xτ : τ ∈ I} over a domain Ω and an index set I. In our context, Ω is the
finite set of discrete states S and I = R+ is the set of real numbers which we
interpret as time. Xτ = s′ denotes that at time τ the random variable has the
value s′, and Pr{Xτ = s′ | X0 = s} = Pr_s{Xτ = s′} denotes the probability of this
situation given that initially its value is s. In the following we may use C and
X synonymously.

Fig. 1. A stochastic reward net inducing the MRM given in [Clo06]; its reward
structure is specified as RS[demo] { active > 0 : 100; broken > 0 : 5;
idle > 0 : 50; sleeping > 0 : 20; }.

Fig. 2. The bounded approximating SPN N^A for the stochastic reward net given
in Fig. 1.

    The reward structure demo is defined by four reward items [KNP07]. Each state
where, e.g., place active has at least one token, specified by the guard
active > 0, earns a reward of 100. If a state satisfies the guards of several
reward items, the actual rate reward is given as the sum of the single rewards.
    The reward structure ̺ defines the stochastic process Y with the continuous
domain R+ but the same index set as X. Yτ denotes the accumulated reward at
time τ . For a path σ the accumulated reward at time τ is given as
        Yτ(σ) = Σ_{i=0}^{σ(τ)−1} ̺_{si} · τi  +  ̺_{s_{σ(τ)}} · (τ − Σ_{i=0}^{σ(τ)−1} τi).
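    As a small illustration of this formula, the following sketch (names are our own,
not from the paper's tool) computes Yτ(σ) for a finite path prefix given as
(state, sojourn time) pairs and a reward structure ̺.

    def accumulated_reward(path, rho, tau):
        # path: list of (state, sojourn_time) pairs; rho: dict state -> rate reward
        elapsed, y = 0.0, 0.0
        for state, sojourn in path:
            if elapsed + sojourn >= tau:               # tau falls into this sojourn
                return y + rho[state] * (tau - elapsed)
            y += rho[state] * sojourn                  # full sojourn contributes rho * tau_i
            elapsed += sojourn
        raise ValueError("path prefix too short for time tau")

For the reward structure demo of Fig. 1, accumulated_reward([('active', 0.5),
('broken', 2.0)], {'active': 100, 'broken': 5}, 1.0) yields 100 · 0.5 + 5 · 0.5 = 52.5.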


3       Continuous Stochastic Reward Logic
CSRL [BHHK00] is an extension of the popular Computation Tree Logic (CTL)
[CES86]. Temporal operators are decorated with a time interval I, and a reward
interval J, and path quantifiers are replaced by the probability operator P.
Syntax. The CSRL syntax is defined inductively given state formulas

                                      φ ::= true | ap | ¬φ | φ ∨ φ | P⊲⊳p [ψ] ,

and path formulas
                         ψ ::= X^I_J φ | φ U^I_J φ
with ap ∈ AP, ⊲⊳ ∈ {<, ≤, ≥, >}, p ∈ [0, 1], and I, J ⊆ R+. For convenience we
introduce the operators F^I_J φ ≡ true U^I_J φ, Φ ∧ Ψ ≡ ¬(¬Φ ∨ ¬Ψ) and Φ ⇒ Ψ ≡
¬Φ ∨ Ψ.
Semantics. CSRL state formulas are interpreted over the states and path formu-
las over the paths of the MRM M . For state formulas we define the satisfaction
relation |= as follows
 – s |= ap ⇔ ap ∈ L(s)
 – s |= ¬Φ ⇔ s ⊭ Φ
 – s |= Φ ∨ Ψ ⇔ s |= Φ ∨ s |= Ψ
 – s |= P⊲⊳p[ψ] ⇔ Prob^M_s(ψ) ⊲⊳ p,

with Prob^M_s(ψ) = Pr{Paths_{s,ψ}} as the probability to satisfy the path formula
ψ starting in s. The set Paths_{s,ψ} = {σ ∈ Paths_s | σ |= ψ} is measurable [Clo06].
Sat(φ) denotes the set of states satisfying a state formula φ. For path formulas
|= is defined as

 – σ |= X^I_J Φ ⇔ |σ| ≥ 1 ∧ τ0 ∈ I ∧ σ[1] |= Φ ∧ Yτ0(σ) ∈ J
 – σ |= Φ U^I_J Ψ ⇔ ∃τ ∈ I : σ(τ) |= Ψ ∧ ∀τ′ < τ : σ(τ′) |= Φ ∧ Yτ(σ) ∈ J

Examples. Survivability: In [Clo06] CSRL is used to define a notion of surviv-
ability for critical systems. For a state the survivability can be specified as the
state formula
                  disaster ⇒ P≤p[F^{[0,τ]}_{[0,y]} recovered],
which means that with a given probability the system can recover from a disaster
state within τ time units and with recovery costs of at most y.
Advanced path properties: CSRL formulas can be used to specify additional
constraints on paths. Consider the reward structure ρ^φ : S → {0, 1} with

                  ρ^φ(s) = 1   if s ∈ Sat(φ),
                  ρ^φ(s) = 0   otherwise,

which assigns to each φ-state a reward of one. The induced process Y^φ accumulates
for each path the sojourn time in φ-states. Thus we can specify for instance
the following property

                  P⊲⊳p[Φ U^{[τ,τ]}_{[0,τ/2]} Ψ],

which is only fulfilled on (Φ U^{[τ,τ]} Ψ)-paths if at most half of the time τ is
spent in φ-states. Further examples for the usefulness of CSRL can be found in
[BCH+ 10].
Model checking. Model checking is the technique which answers the question
whether a model specification M satisfies a given property φ, formally written
as M |= φ. In the scenario at hand, M is given as a stochastic reward net and
φ as a CSRL formula.
   The basic CSRL model checking algorithm is similar to that of CTL [CES86].
Given the formula tree, each sub-formula is evaluated for all states starting
with the innermost formulas and ending with the top formula. The most time-
consuming part is the computation of the probability Prob^M_s(ψ) for all s ∈ S. In
the following we are interested in the complete vector Prob^M(ψ). Especially in
the case of the U-operator expensive numerical computations have to be done.
Continuous Stochastic Logic (CSL). Before discussing CSRL specific model
checking we will sketch the procedure for CSL [ASSB00], the CSRL subset where
J = [0, ∞). In this case we omit J for readability. CSL model checking can be
reduced to the numerical computation of standard CTMC measures [BHHK03]
and has been implemented in several tools.
Next. The evaluation of the path formula ψ = X^I Φ means to compute for each
state the probability

        Prob^M_s(X^I Φ) = (e^{−E(s)·inf(I)} − e^{−E(s)·sup(I)}) · Σ_{s′∈Sat(Φ)} P(s, s′)

that a state transition leading to a Φ-state occurs within the given time interval
[BHHK03]. The evaluation requires a single multiplication of a vector and the
matrix P.
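    A hedged sketch of this computation (illustrative names, dense matrices for
brevity, not MARCIE's code): the Sat(Φ)-indicator vector is multiplied once with
the embedded matrix P and scaled by the exponential terms.

    import numpy as np

    def prob_next(R, sat_phi, t_lo, t_hi):
        # sat_phi: boolean vector marking Sat(Phi); returns Prob_s(X^[t_lo,t_hi] Phi) for all s
        E = R.sum(axis=1)
        P = np.divide(R, E[:, None], out=np.zeros_like(R, dtype=float), where=E[:, None] > 0)
        reach_phi = P @ sat_phi.astype(float)          # sum over Phi-successors P(s, s')
        return (np.exp(-E * t_lo) - np.exp(-E * t_hi)) * reach_phi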
Until. The evaluation of this operator can be reduced to transient analysis on a
modified Markov chain where several states have been made absorbing depending
on the given formula. In this context M[φ] denotes the Markov chain derived
from M by making all φ-states absorbing.
   The following path formulas have to be considered:
1) Φ U^{[0,τ]} Ψ: In this case it holds that

        Prob^M_s(Φ U^{[0,τ]} Ψ) = Σ_{s′∈Sat(Ψ)} π^{M[¬Φ∨Ψ]}_s(s′, τ).


A simple transient analysis is necessary while making (¬Φ ∨ Ψ )-states absorbing.
2) Φ U^{[τ,τ]} Ψ: Paths leading to (¬Φ)-states are cut, and it holds that

        Prob^M_s(Φ U^{[τ,τ]} Ψ) = Σ_{s′∈Sat(Ψ)} π^{M[¬Φ]}_s(s′, τ).

3) Φ U^{[τ,τ′]} Ψ: In this case we deal with an inhomogeneous CTMC, as the tran-
sition relation R is time-dependent. Before reaching the lower time bound, it is
possible to leave a Ψ-state, provided that it fulfills Φ. After reaching the lower
time bound also Ψ-states become absorbing. The computation requires two steps.
It holds that

        Prob^M_s(Φ U^{[τ,τ′]} Ψ) =
            Σ_{s′∈Sat(Φ)} π^{M[¬Φ]}_s(s′, τ) · ( Σ_{s″∈Sat(Ψ)} π^{M[¬Φ∨Ψ]}_{s′}(s″, τ′ − τ) ).


We ignore the case that sup(I) = ∞, as it cannot be handled solely by transient
analysis; in this case we have to solve a linear system of equations [BHHK03].
Computing the vector Prob^M(Φ U^I Ψ) this way requires a transient analysis for each
reachable state. In [KKNP01] this method has been improved such that one global transient
analysis is carried out backwards to compute the probability to fulfill a path
formula for all states in one pass; the result is the vector Prob^M(Φ U^I Ψ).
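    The following sketch illustrates case 1) with the backward one-pass computation in the
spirit of [KKNP01] (illustrative dense-matrix code with our own names, not MARCIE's symbolic
on-the-fly implementation): all (¬Φ ∨ Ψ)-states are made absorbing and the Sat(Ψ)-indicator
vector is propagated backwards through the uniformized chain.

    import numpy as np
    from scipy.stats import poisson

    def prob_until_upto(R, sat_phi, sat_psi, tau, eps=1e-9):
        # sat_phi, sat_psi: boolean vectors; returns Prob_s(Phi U^[0,tau] Psi) for all s
        absorbing = ~sat_phi | sat_psi                  # (not Phi or Psi)-states
        R_mod = np.where(absorbing[:, None], 0.0, R)    # cut their outgoing transitions
        E = R_mod.sum(axis=1)
        lam = max(E.max(), 1e-12)
        P_unif = R_mod / lam + np.diag(1.0 - E / lam)
        n = int(poisson.ppf(1.0 - eps, lam * tau))
        prob, v = np.zeros(len(R)), sat_psi.astype(float)
        for k in range(n + 1):
            prob += poisson.pmf(k, lam * tau) * v
            v = P_unif @ v                              # backward: matrix times vector
        return prob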
    CSRL. Now the evaluation of path formulas requires considering the given
reward interval J.
Next. Assuming that state s earns a positive reward ̺s , it is possible to derive
the time interval Is by dividing all elements of J by ̺s . Is represents the set
of sojourn times in s which do not break the given reward constraints. This
observation results in
                  Prob^M_s(X^I_J Φ) = Prob^M_s(X^{I∩Is} Φ).
Given that ̺s = 0 we have to distinguish two cases: if inf(J) = 0, we only have
to consider I; otherwise state s cannot fulfill the formula.
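    A small sketch of this interval scaling (helper names are ours):

    def scaled_interval(J, rho_s):
        # I_s = J / rho_s for rho_s > 0; for rho_s = 0 the reward stays 0 forever,
        # so the reward constraint can only be met if inf(J) = 0
        if rho_s > 0:
            return (J[0] / rho_s, J[1] / rho_s)
        return (0.0, float("inf")) if J[0] == 0 else None

    def intersect(I1, I2):
        lo, hi = max(I1[0], I2[0]), min(I1[1], I2[1])
        return (lo, hi) if lo <= hi else None

The Next probability is then obtained with the X^I routine sketched above, applied to
I ∩ Is (or is 0 if the intersection is empty).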
Until. At first glance there is no standard technique which can easily be
applied to evaluate the full spectrum of possible combinations of time and reward
bounds. Contributions in the literature [HCHK02,BCH+ 10] often consider the
special case Φ U^{[0,τ]}_{[0,y]} Ψ, for which one can apply a similar strategy as for CSL. In
this case it is possible to derive an MRM M′ by making Ψ- and ¬(Φ ∨ Ψ)-states
absorbing. For M′ it holds [HCHK02]

        Prob^M_s(Φ U^{[0,τ]}_{[0,y]} Ψ) = Prob^{M′}_s(F^{[τ,τ]}_{[0,y]} Ψ).

This allows us to reduce the problem to the computation of the joint distribution
of time and reward, also known as performability,

        Pr_s{Xτ ∈ S′, Yτ ∈ [0, y]},

which is the probability to be at time τ in a state s′ ∈ S′ (here S′ is Sat(Ψ))
having accumulated at most a reward of y. An elaborate treatment of
CSRL model checking and related techniques to compute the joint distribution
of time and accumulated reward, among them Markovian approximation, can
be found in [Clo06].

Markovian approximation. Markovian approximation is a technique to com-
pute the joint distribution by applying standard techniques such as transient analysis
to a CTMC which approximates the behaviour of the actual MRM. The accu-
mulated reward is encoded in the discrete states. This requires replacing the
continuous accumulation of rewards by a discrete one. The discretized accu-
mulated reward increases stepwise by a fixed value ∆. It requires y/∆ steps to
approximately accumulate a reward of y. After n steps the actual accumulated
reward lies in the interval [n · ∆, (n + 1) · ∆), which is the nth reward level.
When the reward level is considered as a part of the model, the states of this
CTMC are tuples (s, i) where s ∈ S is a state of the original model and i ∈ N
the reward level. On level i all state transitions of the original model, given by
the transition relation R, are possible and now denoted as Ri. The increase of
the accumulated reward, i.e. the step to the next level, must be realized by an ad-
ditional state transition relation Ri,i+1 which maps the reward function of the
MRM to stochastic rates. During one time unit the original MRM accumulates
a reward of ̺s in state s. In the derived CTMC C^A, we must therefore observe ̺s/∆
level-increasing transitions per time unit in state s. Hence, for each state (s, i)
there must be a state transition to the state (s, i + 1) with rate ̺s/∆.
    The state space is infinite but countable, as the reward grows monotoni-
cally without an upper bound. The resulting CTMC's rate matrix R^A has the
following structure and represents a special type of a so-called quasi birth-death
process (QBD) [RHC05]:

        R^A = [ R_0   R_{0,1}   0         0          . . .                  ]
              [ 0     R_1       R_{1,2}   0          . . .                  ]
              [ . . . 0         . .       . .        0       . . .          ]
              [ . . . . . .     0         R_i        R_{i,i+1}   0   . . .  ]
              [ . . . . . .     . . .     . . .      . .     . .    . . .   ]
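    Under the assumption of a dense representation (purely illustrative; MARCIE never
builds this matrix explicitly, see Section 4), a finite prefix of R^A covering a given
number of reward levels could be assembled as follows:

    import numpy as np

    def approximating_rate_matrix(R, rho, delta, levels):
        # states are pairs (s, i); the diagonal block i holds R (the relation R_i),
        # and the block to its right holds the level-crossing rates rho[s]/delta
        n = len(R)
        RA = np.zeros((n * levels, n * levels))
        step = np.asarray(rho, dtype=float) / delta
        for i in range(levels):
            blk = slice(i * n, (i + 1) * n)
            RA[blk, blk] = R                            # R_i: original behaviour
            if i + 1 < levels:
                nxt = slice((i + 1) * n, (i + 2) * n)
                RA[blk, nxt] = np.diag(step)            # R_{i,i+1}
        return RA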

    The probability Pr_s{Xτ ∈ S′, Yτ ∈ (j∆, (j + 1)∆]} to reach at time τ a state
from S′ while having accumulated a reward in (j∆, (j + 1)∆] is approximated by

        Pr_{(s,0)}{X^A_τ ∈ {(s′, j) | s′ ∈ S′}}.

    Uniformization-based transient analysis and CSL model checking of infinite
QBDs is feasible [RHC05] as transient analysis for time t requires only a finite
number n of computation steps. This number only depends on the given time t,
a constant λ, which should be at least the maximal exit rate, and an error bound
ǫ. Thus only a finite number of states will be considered; in fact all the states
which are reachable from the initial state by paths with a maximal length of n.
In the scenario at hand we can use y instead of t to define the finite subset of
reward levels. Let S_i denote the set of states at reward level i, S_{>i} those above,
and S_{≤i} those below or at reward level i. For given y and ∆ we define the boundary
level r = y/∆ representing the reward value y. States with a reward value higher
than y can be abstracted to the set S_{>r}, and states with at most a value of y
belong to the set S_{≤r}. Starting at level 0, one has to cross r + 1 levels (including
level 0) to exceed the specified reward bound. Thus we can make all states in S_{r+1}
absorbing and get a finite CTMC. Alternatively we can use the CSL model checking
algorithm to justify that only this finite subset of levels has to be considered.
Given the infinite CTMC C^A, the joint distribution Pr_s{Xτ ∈ S′, Yτ ∈ [0, y]} is
approximated by Pr_s{X^A_τ ∈ S′ ∩ S_{≤r}}, whereby only the probability of states
s ∈ S_0 is of interest. For all s ∈ S_0 we have to compute Prob^{C^A}_s(S_{≤r} U^{[τ,τ]} S′ ∩ S_{≤r}),
the probability for all states at reward level 0 to be at time τ in a state from S′ while
residing only in S_{≤r}-states. As the algorithm makes all (¬Φ)-states, i.e. all S_{>r}-states,
absorbing, S_{>r+1}-states will not be reachable in the resulting CTMC C^A.

Stochastic Petri net interpretation. To approximately compute the joint
distribution we will perform transient analysis on the derived CTMC C^A
using the fast method suggested in [KKNP01]. As we achieve this using an on-
the-fly computation of the rate matrix, a high-level description of it by means
of a stochastic Petri net is required. Thus we will extend the existing net by
an additional place, whose markings represent the reward levels. The number of
tokens on this additional place py increases only by the firing of the transitions
which define the relation Ri,i+1 .
    We derive this set of transitions T ̺ directly from the reward structure defi-
nition. Similar to [KNP07] we define a reward structure as a set of reward items,
as can be seen in Figure 1. A reward item consists of a guard g and a reward
function r. The guard g is given as an interval logic function defining a set of
states. Reduced Ordered Interval Decision Diagrams (IDDs) are a canonical rep-
resentation of interval logic functions [ST10] and make it easy to extract the set
of reward transitions, as illustrated in [Sch12]. These transitions induce the
SPN N^̺ = [P, T^̺, ∅, I^̺, R^̺, V^̺, ∅].
    So far the transitions T ̺ only define a reward function for certain sets of
states, but do not affect the set of reachable states. Two steps remain:
 1. The firing of these transitions must increase the number of tokens on py .
 2. The rate reward function must be transformed to a stochastic rate. For all
    transitions t ∈ T ̺ the new function must be changed to ht /∆.
Given the original stochastic reward net as the SPN N = [P, T, F, I, R, V, s0], the
reward structure ̺ in turn as the net N^̺ = [P, T^̺, ∅, I^̺, R^̺, V^̺, ∅] and a specified
reward step ∆, we can define the SPN N^A = [P^A, T^A, F^A, I^A, R^A, V^A, s^A_0] with
P^A = P ∪ {p_y}, T^A = T ∪ T^̺, F^A = F ∪ {((t, p_y), 1) | t ∈ T^̺}, I^A = I ∪ I^̺ ∪
{(p_y, t, y/∆ + 1) | t ∈ T^̺}, R^A = R ∪ R^̺, V^A = V ∪ {(t, ht/∆) | (t, ht) ∈ V^̺} and
s^A_0 = s0. The set of inhibitor arc weights is already prepared to define the finite
version of C^A. Figure 2 shows the stochastic Petri net derived from the stochastic
reward net in Figure 1.
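    A sketch of this construction on a toy net representation (the dictionary layout and
helper names are our assumptions, not MARCIE's data structures):

    def add_reward_levels(spn, reward_items, y, delta):
        # spn: dict with keys 'places', 'transitions', 'arcs', 'inhibitor', 'rates', 'guards';
        # reward_items: list of (guard, reward) pairs; mutates spn and returns it
        spn["places"].append("p_y")
        bound = int(y / delta) + 1                      # inhibitor arc weight y/delta + 1 (y assumed a multiple of delta)
        for k, (guard, reward) in enumerate(reward_items):
            t = f"t_rho_{k}"
            spn["transitions"].append(t)
            spn["rates"][t] = lambda s, r=reward: r / delta   # rate reward turned into a stochastic rate
            spn["guards"][t] = guard                    # enabling condition from the reward item
            spn["arcs"][(t, "p_y")] = 1                 # firing adds one token to p_y
            spn["inhibitor"][("p_y", t)] = bound        # disables t above the boundary level
        return spn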

CSRL model checking based on CSL. We used the CSL model checking
algorithm to argue for a finite subset of the approximating CTMC C^A, and we
have now specified the related high-level description N^A. This suggests realizing
CSRL model checking on top of an existing CSL model checker using
the SPN N^A and the CSL formulas given in Table 1. The constraints concerning
the discretized accumulated reward given as interval J are just encoded into state
formulas. However, this explicit approach has several drawbacks. First, we will
in most cases have a significantly higher memory consumption, as even a symbolic
CSL model checker stores the exit rates explicitly and has to allocate a vector
of the size of the state space of C^A. To illustrate this problem, Table 4 shows
the state space for different accuracy levels for the scalable FMS system [CT93].
Second, when nesting CSRL formulas or checking a set of formulas, this would
require reconstructing C^A as we will consider different reward bounds. Moreover,
some combinations of time and reward intervals I and J allow some optimization
in terms of the necessary reward levels or the time bounds. E.g. if ̺min > 0,
which means that the reward function is total, we can possibly adjust the lower
time bound: if y/̺min > τ, then τ can be replaced by y/̺min, which may reduce the
number of required uniformization steps. If τ′ · ̺max < y′, then y′ can be replaced
by ̺max · τ′, which may reduce the number of reward levels. Further we can check
whether the given bounds are consistent: e.g. if τ′ · ̺max < y or τ · ̺min > y′, no
path will fulfill the CSRL formula.
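    These checks are easy to automate; a minimal sketch (our own helper, assuming finite
bounds I = [τ, τ′], J = [y, y′] and the rate extrema ̺min, ̺max) could look like:

    def check_and_tighten(tau, tau2, y, y2, rho_min, rho_max):
        # returns possibly tightened bounds, or None if no path can fulfill the formula
        if tau2 * rho_max < y or tau * rho_min > y2:
            return None                                 # inconsistent bounds
        y2 = min(y2, rho_max * tau2)                    # fewer reward levels may suffice
        return tau, tau2, y, y2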
For these reasons we propose to automatically
 1. generate the approximating SPN from the actual MRM specification as de-
    scribed above,
 2. encode the actual CSRL formula implicitly for the CSL model checking al-
    gorithms by adapting the transition relation.
We offer a reference implementation of this approach in the model checker MAR-
CIE. In the next section we will sketch some implementation issues.

4    Implementation in MARCIE
MARCIE [SRH11] is a symbolic model checker for stochastic Petri nets, offering
besides symbolic state space analysis (e.g. CTL model checking) simulative and
numerical engines for quantitative analysis, such as model checking of CSL and
the reward extensions defined in [KNP07]. Its symbolic model checking engines are
based on Interval Decision Diagrams, which are used to encode state sets of
bounded Petri nets [ST10]. In contrast to other approaches, numerical analysis is
based on a multi-threaded on-the-fly computation of the rate matrix. This means
that the matrix is not encoded using dedicated data structures such as sparse matrices
or multi-terminal decision diagrams. The computation of the matrix entries is
realized by an emulation of the firing of Petri net transitions given the IDD-
based state space encoding. This technique does not require structured models,
is not sensitive to the number of distinct matrix entries, and is often
able to outperform other symbolic approaches aiming at the numerical analysis
of huge CTMCs [SH09,HRSS10,SRH11]. However, the on-the-fly computation
requires a high-level description of the CTMC in terms of a stochastic Petri net.
Thus the discussed Markovian approximation is the method of choice for the
implementation of our CSRL model checker.


   CSRL                    CSL
   Φ U^I_{[y,y]} Ψ         Φ U^I (Ψ ∧ p_y = y/∆)
   Φ U^I_{[0,y]} Ψ         Φ U^I (Ψ ∧ p_y ≤ y/∆)
   Φ U^I_{[y,y′]} Ψ        Φ U^I (Ψ ∧ p_y > y/∆ ∧ p_y ≤ y′/∆)

Table 1. CSL representation of several CSRL formulas for the approximating SPN N^A.

   J          v[i] = 1 ⇔ s_{(i mod |S|)} ∈ Sat(Ψ) and
   [y, y]     |S| · (l − 2) ≤ i < |S| · (l − 1)
   [0, y]     0 ≤ i < |S| · (l − 1)
   [y, y′]    |S| · ⌊y/∆⌋ < i ≤ |S| · (l − 1)

Table 2. Initial values for the computation vectors depending on the CSRL formula.


    In the following we exclude the case sup(I) = ∞, as we concentrate on the
cases where we can apply transient analysis. However, our tool supports full
CSRL, which includes time and reward intervals with unspecified upper bounds.
In those cases we use the algorithm for the untimed CSL until-operator, which is
based on solving linear systems of equations using iterative methods such as Jacobi
or Gauss-Seidel, see [BHHK03].
    The implementation which extends the existing symbolic engine considers the
place p_y only implicitly. This means that the state space of the model remains
as it is. The entries in the submatrices R_i and R_{i,i+1} with i ≥ 1, representing higher
reward levels, must be computed additionally. Given l = y/∆ + 2 as the number of
required reward levels, including the levels for [0, ∆) and [(y + 1)∆, ∞), we have
to consider the original rate matrix R l times, and each time we have to shift
the related matrix positions (row and column indices) by the size of the state
space. In the i-th step, which means considering reward level [i · ∆, (i + 1) · ∆),
we access the vector at the indices in the interval [i · |S|, (i + 1) · |S|), although
the on-the-fly matrix extraction computes the indices in [0, |S|). This behaviour
is realized by a special extraction policy, which we call MultiExtraction, as the
rate matrix is computed in one step several times by shifting the row and
column indices. Obviously the computation vectors used have to be of dimension
|S| · l.
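    The effect of the MultiExtraction policy can be pictured with a dense sketch
(illustrative; the real implementation never materializes R^A and works on the IDD-encoded
state space): the product R^A · v is obtained by applying R once per level with shifted
indices and adding the level-crossing contributions.

    import numpy as np

    def multi_extraction_matvec(R, rho, delta, l, v):
        # v has dimension |S| * l; block i of the result gets R applied to block i of v
        # plus the level-crossing rates rho/delta times block i+1
        n = len(R)
        out = np.zeros(n * l)
        step = np.asarray(rho, dtype=float) / delta
        for i in range(l):
            blk = slice(i * n, (i + 1) * n)
            out[blk] += R @ v[blk]                      # indices shifted by i*|S|
            if i + 1 < l:
                nxt = slice((i + 1) * n, (i + 2) * n)
                out[blk] += step * v[nxt]               # contribution of R_{i,i+1}
        return out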
    These aspects are not directly affected by the actual reward bounds given in
the path formula. What does depend on the formula are: the set of states for
which transition firing is considered (depending on specified step bounds), the
initialization of the argument vector, and the size of the vector representing the
exit rates and the access to it.
Exit rates. If for every state s the exit rates of the states (s, i) are identical for all
0 ≤ i ≤ l, we can redirect the access to an exit rate, independently of the reward
level, to the related exit rate at reward level 0. This requires a translation of
the actual index, but the exit rate vector has only dimension |S|.
Vector initialization. We use the method described in [KKNP01], which allows
us to compute Prob^{C^A}(Φ U Ψ) in one pass. Thus the argument vector v is initialized
in each position with the probability of the related state to reach a Ψ-state,
which is 1 for all Ψ-states and 0 otherwise. The adaptation to the different cases
of reward intervals is given in Table 2.
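    A sketch of this initialization following Table 2 (illustrative; `kind` distinguishes
the three reward-interval cases and y_level stands for ⌊y/∆⌋):

    import numpy as np

    def init_vector(sat_psi, l, kind, y_level=0):
        # sat_psi: boolean |S| vector; kind: "point" (J=[y,y]), "upto" (J=[0,y]), "between" (J=[y,y'])
        n = len(sat_psi)
        v = np.zeros(n * l)
        for i in range(n * l):
            if not sat_psi[i % n]:
                continue
            if kind == "point" and n * (l - 2) <= i < n * (l - 1):
                v[i] = 1.0
            elif kind == "upto" and i < n * (l - 1):
                v[i] = 1.0
            elif kind == "between" and n * y_level < i <= n * (l - 1):
                v[i] = 1.0
        return v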
Restricting the state transition relation. According to Section 3 we encode
the level-dependent CSL formulas given in Table 1 implicitly into the transition
relation of the net N^A. As the enabledness of a transition now depends
not only on the actual system state but also on the current reward level, we
internally decorate the transitions of the net with step bounds in which they are
allowed to fire. For the specification of the restricted transition relation we
basically have to distinguish the following two cases:

 1. We are dealing with homogeneity in the reward dimension (J = [0, y]) or
    consider a single time point I = [t, t]. This represents the simple case. Ho-
    mogeneity in the reward dimension allows all transitions to fire independently
    of the reward bound, and the absorbing states are derived solely from the
    time interval I. If the latter specifies a time point, the situation is similar,
    as only (¬Φ)-states are made absorbing. In this case the vector storing the
    exit rates can have dimension |S|.
 2. If J ≠ [0, y] and I ≠ [t, t], we have to consider a model that is possibly
    inhomogeneous in both dimensions. We now have to consider at least the lower
    reward bound when specifying the absorbing states. In this case the exit
    rate vector has the size of the implicit state space. Further, the inhomogeneity
    must be encoded into the transition relation as follows (see the sketch below):
    (a) all transitions fire in (Φ ∧ ¬Ψ)-states for all reward levels,
    (b) reward transitions fire in (¬Φ ∧ Ψ)-states only until reaching the lower
        reward level bound, which is y/∆,
    (c) all transitions fire in (Φ ∧ Ψ)-states only until reaching the lower reward
        level bound, which is y/∆,
    (d) (¬Φ ∧ ¬Ψ)-states are absorbing.
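    The sketch referenced above restates these firing rules as a single predicate (names are
assumptions; `lower_level` is y/∆):

    def may_fire(is_reward_transition, sat_phi, sat_psi, level, lower_level):
        # decides whether a transition may fire in a state at the given reward level
        if not sat_phi and not sat_psi:
            return False                                # (d) absorbing
        if sat_phi and not sat_psi:
            return True                                 # (a) all reward levels
        if not sat_phi and sat_psi:
            return is_reward_transition and level < lower_level   # (b)
        return level < lower_level                      # (c) Phi and Psi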


5    Related Work
We could provide a long list of analysis and model checking tools for CTMCs
or SPNs, but we are not aware of a CSRL model checker for MRMs, stochastic re-
ward nets or a comparable high-level formalism. The probabilistic model checker
PRISM [KNP11] offers CSL model checking and allows models to be augmented with
reward structures, but only supports their analysis concerning expected instan-
taneous and cumulative measures. The Markov Reward Model Checker MRMC
[KZH+ 09] extends CSL by the path formulas X^{[0,τ]}_{[0,y]} Φ and Φ U^{[0,τ]}_{[0,y]} Ψ. It deploys
the path exploration by Qureshi and Sanders [QS96] and the discretization by
Tijms and Veldman [TV00] for the computation of the joint distribution of time and re-
ward. It also supports impulse rewards. The results presented in [Clo06] are based
on a proprietary implementation.


6    Experimental Results
We now present some experimental results for the FMS system [CT93]. All
computational experiments were done on an 8 × 2.26 GHz Mac Pro workstation with
32 GB RAM running CentOS 5.5. We restricted all experiments to a runtime
of at most 10 hours. For the experiments we considered a very simple reward
structure, which assigns to states satisfying P3s > 0 a reward of 1.
    We performed the following experiments:
1. We compared the implicit implementation for the formula P≥0[F^{[0,2]}_{[0,1]} P12 >
   0] with the explicit approach using the SPN N^A and the related level-
   dependent CSL formula (compare Table 1) concerning runtime and mem-
   ory consumption. We used the FMS system with N = 6 and increased the
   accuracy by changing the number of reward levels. The experiments were
   performed single-threaded and multi-threaded with eight threads. Figure 3
   shows memory consumption and runtime. To judge the results we also used
   the CSL model checker of the PRISM tool for the explicit approach.
2. We compared the implicit implementation with the methods supported in
   MRMC using the CSRL formula P≥0[F^{[0,2]}_{[0,1]} P12 > 0]. Table 3 compares the
   computed probabilities for different accuracy settings. The time needed to
   check the formula for different accuracy settings and different model sizes is
   given in Table 5. For comparison, for N = 3 and d = 0.05 MRMC requires
   nearly 95 hours using discretization. However, compared to the Markovian
   approximation, the memory consumption of MRMC is negligible.
As expected [CH06,Clo06], the Markovian approximation outperforms the meth-
ods supported by MRMC. The results in Table 3 show that we can achieve a
good approximation with a moderate number of reward levels. Our implemen-
tation shows that on current workstations CSRL model checking is feasible for
systems with millions of states. We can see that the direct implementation has
a lower memory consumption since it omits the redundant storage of exit rates.
Multi-threading reduces the runtime significantly in both cases. Experimental
results for further case studies are given in [Sch12].


                    MARCIE                                        MRMC
            Markovian Approximation               Discretization        Path exploration
 N ∆ = 0.05 ∆ = 0.025 ∆ = 0.0125 ∆ = 0.00625 d = 0.05 d = 0.025 w = 0.0001 w = 0.00001
 1 0.0030576 0.0030546 0.0030529     0.0030521 0.0025259 0.0027855 0.00304942 0.00304942
 2 0.0088666 0.0088535 0.0088463     0.0088426 0.0074412 0.0081340           −           −
 3 0.0142780 0.0142573 0.0142461     0.0142403         −          −          −           −
                  – means that the experiment was canceled after 10 hours
Table 3. Comparison of the results using MARCIE and MRMC for different levels of
accuracy and different initial states.




                 N      | Sorg |    | S10 |       | S100 |       | S1000 |
                  1          54        648          5,508          54,108
                  3       6,520     78,240       665,040       6,533,040
                  5    152,712 1,832,544      15,576,624     153,017,424
                  7 1,639,440 19,673,280 167,222,880 1,642,718,880
                  9 11,058,190 132,698,280 1,127,935,380 11,080,306,380
Table 4. Size of the reachability graphs for different configurations of the SPN model
of the FMS system. The model is scalable concerning the initial marking of the places
P1, P2, P12 and P3. The column | Sorg | contains the state space size of the original
model. The columns | Sn | show the number of states of the derived CTMCs, where
n = y/∆. In our experiments we set y to 1, thus | S1000 | represents an approximation
where the reward increases in each step by ∆ = 0.001.




7    Conclusions
In this paper we presented an efficient implementation of a model checker for
CSRL and Markov reward models specified by means of stochastic reward nets.
We sketched some specific aspects of implementing the described techniques in our
symbolic and multi-threaded model checker MARCIE. A case study has been
used to empirically illustrate its capabilities concerning the manageable state space size.
We are not aware of a comparable implementation of CSRL model checking in
any other public tool.
                                N ∆ = 0.05 ∆ = 0.025 ∆ = 0.0125 ∆ = 0.00625
                                 1     ≪ 1s       ≪ 1s         ≪ 1s        ≪ 1s
                                 3       1s          2s          10s          29s
                                 5      30s       1m7s       3m37s        12m1s
                                 7 6m09s 13m27s             42m29s     162m37s
                                 9 40m18s 109m32s 353m50s                      −
                                       – memory limit of 31 GB reached
                    Table 5. Total runtime for different values of ∆ and different model sizes.


Fig. 3. Comparison of an explicit and an implicit realization of the Markovian approxi-
mation for the FMS system with N = 6 with varying reward levels: memory consumption
in KB (left) and total runtime in seconds (right) over the number of reward levels, for
PRISM and for the explicit and implicit implementation with 1 and 8 threads.


    Currently the CSRL model checker is restricted to stochastic reward nets
defining rate rewards. In the future we want to incorporate also impulse rewards.
The Markovian approximation increases the memory consumption significantly,
as the computation vectors have to be of size |S| · (y/∆ + 2). We are convinced that
the implicit representation of the underlying QBD allows an efficient application
of so-called “out-of-core” methods, which store the complete vector on hard disk
while providing to the algorithms the fragments which will be used next.
    We will explore the use of our tool in further case studies. MARCIE is avail-
able for noncommercial use; we provide statically linked binaries for Mac OS X
and Linux. The tool, the manual and a benchmark suite can be found on our
website
http://www-dssz.informatik.tu-cottbus.de/DSSZ/Software/Marcie.


References
[ASSB00] A. Aziz, K. Sanwal, V. Singhal, and R. Brayton. Model checking continuous
          time Markov chains. ACM Trans. on Computational Logic, 1(1), 2000.
[BCH+ 10] Christel Baier, Lucia Cloth, Boudewijn R. Haverkort, Holger Hermanns,
          and Joost-Pieter Katoen. Performability assessment by model checking of
Markov reward models. Formal Methods in System Design, 36(1):1–36, 2010.
[BHHK00] Christel Baier, Boudewijn Haverkort, Holger Hermanns, and Joost-Pieter
          Katoen. On the logical characterisation of performability properties. In Au-
          tomata, Languages and Programming, volume LNCS 1853 of Lecture Notes
          in Computer Science, pages 780–792. Springer-Verlag, 2000.
[BHHK03] C. Baier, B. Haverkort, H. Hermanns, and J.-P. Katoen. Model-checking
algorithms for continuous-time Markov chains. IEEE Trans. on Software
          Engineering, 29(6), 2003.
[CBC+ 93] G. Ciardo, A. Blakemore, P. F. Chimento, J. K. Muppala, and K. S. Trivedi.
          Automated generation and analysis of Markov reward models using stochas-
          tic reward nets. IMA Volumes in Mathematics and its Applications: Linear
          Algebra, Markov Chains, and Queueing Models / Meyer, C.; Plemmons,
          R.J, 48:145–191, 1993.
[CES86] Edmund M. Clarke, E. Allen Emerson, and A. Prasad Sistla. Automatic
          verification of finite-state concurrent systems using temporal logic specifi-
          cations. ACM Trans. Program. Lang. Syst., 8(2):244–263, 1986.
[CH06]    Lucia Cloth and Boudewijn R. Haverkort. Five performability algorithms:
          A comparison. In MAM 2006: Markov Anniversary Meeting, pages 39–54,
          Raleigh, NC, USA, June 2006. Boson Books.
[Clo06]   L. Cloth. Model Checking Algorithms for Markov Reward Models. PhD
          thesis, University of Twente, 2006.
[CT93]    G. Ciardo and K. S. Trivedi. A Decomposition Approach for Stochastic
          Reward Net Models. Performance Evaluation, 18(1):37–59, 1993.
[HCHK02] Boudewijn R. Haverkort, Lucia Cloth, Holger Hermanns, and Joost-Pieter
          Katoen. Model checking performability properties. In Proceedings of the
          2002 International Conference on Dependable Systems and Networks, DSN
          ’02, pages 103–112, Washington, DC, USA, 2002. IEEE Computer Society.
[HRSS10] M. Heiner, C. Rohr, M. Schwarick, and S. Streif. A comparative study of
          stochastic analysis techniques. In Proc. CMSB 2010, pages 96–106. ACM,
          2010.
[KKNP01] J.-P. Katoen, M. Kwiatkowska, G. Norman, and D. Parker. Faster and
          symbolic CTMC model checking. In L. de Alfaro and S. Gilmore, editors,
          Proc. 1st Joint International Workshop on Process Algebra and Probabilistic
          Methods, Performance Modeling and Verification (PAPM/PROBMIV’01),
          volume 2165 of LNCS, pages 23–38. Springer, 2001.
[KNP07] M. Kwiatkowska, G. Norman, and D. Parker. Stochastic model checking.
          In M. Bernardo and J. Hillston, editors, Formal Methods for the Design
          of Computer, Communication and Software Systems: Performance Evalu-
          ation (SFM’07), volume 4486 of LNCS (Tutorial Volume), pages 220–270.
          Springer, 2007.
[KNP11] M. Kwiatkowska, G. Norman, and D. Parker. PRISM 4.0: Verification of
          probabilistic real-time systems. In G. Gopalakrishnan and S. Qadeer, edi-
          tors, Proc. 23rd International Conference on Computer Aided Verification
          (CAV’11), volume 6806 of LNCS, pages 585–591. Springer, 2011.
[KZH+ 09] J.-P. Katoen, I. S. Zapreev, E. M. Hahn, H. Hermanns, and D. N. Jansen.
          The Ins and Outs of The Probabilistic Model Checker MRMC. In Quan-
          titative Evaluation of Systems (QEST), pages 167–176. IEEE Computer
          Society, 2009. www.mrmc-tool.org.
[QS96]    M. A. Qureshi and W. H. Sanders. A new methodology for calculating
          distributions of reward accumulated during a finite interval. In International
          Symposium on Fault-Tolerant Computing, pages 116–125, Sendai, Japan,
          1996.
[RHC05] Anne Remke, Boudewijn R. Haverkort, and Lucia Cloth. Model checking
          infinite-state Markov chains. In TACAS, pages 237–252, 2005.
[Sch12]   M. Schwarick. Symbolic model checking of stochastic reward nets. Technical
          Report 05-12, Brandenburg University of Technology Cottbus, Department
          of Computer Science, June 2012.
[SH09]    M. Schwarick and M. Heiner. CSL model checking of biochemical networks
          with interval decision diagrams. In Proc. CMSB 2009, pages 296–312. LNC-
          S/LNBI 5688, Springer, 2009.
[SRH11]   M. Schwarick, C. Rohr, and M. Heiner. MARCIE - Model checking And
          Reachability analysis done effiCIEntly. In Proc. 8th QEST, pages 91–100.
          IEEE CS Press, September 2011.
[ST10]    M. Schwarick and A. Tovchigrechko. IDD-based model validation of bio-
          chemical networks. TCS 412, pages 2884–2908, 2010.
[Ste94]   W.J. Stewart. Introduction to the Numerical Solution of Markov Chains.
          Princeton Univ. Press, 1994.
[TV00]    H. C. Tijms and R. Veldman. A fast algorithm for the transient reward dis-
          tribution in continuous-time Markov chains. In Oper. Res. Lett., volume 26,
          pages 155–158, 2000.