=Paper= {{Paper |id=None |storemode=property |title=Generalization Strategies for the Verification of Infinite State Systems |pdfUrl=https://ceur-ws.org/Vol-598/paper06.pdf |volume=Vol-598 |dblpUrl=https://dblp.org/rec/conf/cilc/FioravantiPPS10 }} ==Generalization Strategies for the Verification of Infinite State Systems== https://ceur-ws.org/Vol-598/paper06.pdf
            Generalization Strategies for the
          Verification of Infinite State Systems

Fabio Fioravanti1 , Alberto Pettorossi2 , Maurizio Proietti3 , and Valerio Senni2
              1
               Dipartimento di Scienze, University ‘G. D’Annunzio’,
                      Viale Pindaro 42, I-65127 Pescara, Italy
                            fioravanti@sci.unich.it
2
  DISP, University of Rome Tor Vergata, Via del Politecnico 1, I-00133 Rome, Italy
                     {pettorossi,senni}@disp.uniroma2.it
              3
                 IASI-CNR, Viale Manzoni 30, I-00185 Rome, Italy
                         maurizio.proietti@iasi.cnr.it



      Abstract. We present a comparative evaluation of some generalization
      strategies which are applied by a method for the automated verification
      of infinite state reactive systems. The verification method is based on
      (1) the specialization of the constraint logic program which encodes the
      system with respect to the initial state and the property to be verified,
      and (2) a bottom-up evaluation of the specialized program. The gen-
      eralization strategies are used during the program specialization phase
      for controlling when and how to perform generalization. Selecting a good
      generalization strategy is not a trivial task: it must guarantee the
      termination of the specialization phase itself, and it should strike a
      good balance between precision and performance. Indeed, a coarse
      generalization strategy may prevent one from proving the properties of
      interest, while an unnecessarily precise strategy may lead to high
      verification times. We
      perform an experimental evaluation of various generalization strategies
      on several infinite state systems and properties to be verified.


1   Introduction

One of the most challenging problems in the verification of reactive systems is
the extension of the model checking technique (see [9] for a thorough overview)
to infinite state systems. In model checking the evolution over time of an infinite
state system is modelled as a binary transition relation over an infinite set of
states (such as n-tuples of integers or rationals) and the properties of that evo-
lution are specified by means of propositional temporal formulas. In particular,
in this paper we consider the Computation Tree Logic (CTL, for short), which is
a branching time propositional temporal logic by which one can specify, among
others, the so-called safety and liveness properties [9].
    Unfortunately, when considering infinite state systems, the verification of
CTL formulas is an undecidable problem, in general. In order to cope with
this limitation, various decidable subclasses of systems and formulas have been
identified (see, for instance, [1,14]). Other approaches enhance finite state model
checking by using more general deductive techniques (see, for instance, [30,33])
or using abstractions, that is, mappings by which one can reduce an infinite
state system to a finite state system, preserving the property of interest (see, for
instance, [2,35]).
    Logic programming and constraint logic programming have also been proposed
as frameworks for specifying and verifying properties of reactive systems.
Indeed, the fixpoint semantics of logic programming languages allows us to easily
represent the fixpoint semantics of various temporal logics [13,27,31]. Moreover,
constraints over the integers or the rationals can be used for providing finite
representations of infinite sets of states [13,17].
    However, for programs which specify infinite state systems, the proof pro-
cedures normally used in constraint logic programming (CLP), such as the ex-
tension to CLP of SLDNF resolution or tabled resolution [8], very often diverge
when trying to check some given temporal properties. This is due to the limited
ability of these proof procedures to cope with infinitely failed derivations. For
this reason, instead of using direct program evaluation, many logic programming-
based verification methods make use of reasoning techniques such as: (i) static
analysis [5,13] or (ii) program transformation [15,23,25,28,32].
   In this paper we further develop the verification method presented in [15]
and we assess its practical value. That method is applicable to specifications of
CTL properties of infinite state systems encoded as constraint logic programs
and it makes use of program specialization.
    The specific contributions of this paper are the following. First, we have
reformulated the specialization-based verification method of [15] as a two-phase
method. Phase (1): the CLP specification is specialized w.r.t. the initial state of
the system and the temporal property to be verified, and Phase (2): a bottom-up
evaluation of the specialized program is performed. By doing so we are able to
better evaluate the role of program specialization in the verification technique.
    Then, we have defined various generalization strategies which can be used
during Phase (1) of our verification method, for controlling when and how to
perform generalization. Selecting a good generalization strategy is not a trivial
task: it must guarantee the termination of the specialization phase itself, by
introducing a finite number of specialization sub-problems, and it should provide
a good balance between precision and performance. Indeed, the use of a too
coarse generalization strategy may prevent one from proving the properties of interest,
while an unnecessarily precise strategy may lead to verification times which
are too high. The generalization strategies we consider in this paper have been
specifically designed for CLP programs using the constraint domain of linear
inequations over rationals.
    We have implemented these strategies on the MAP transformation system [26],
and we have applied them to several infinite state systems and properties taken
from the literature. We have performed a comparative evaluation of generaliza-
tion strategies in terms of efficiency and power, based on the analysis of the
experimental results.

    Finally, we have compared our MAP implementation with various constraint-
based model checkers for infinite state systems and, in particular, with ALV [7],
DMC [13], and HyTech [19].
    The paper is organized as follows. In Section 2 we recall how CTL properties
of infinite state systems can be encoded by using locally stratified CLP programs.
In Section 3 we present our two-phase method for verification. In Section 4 we
describe various strategies that can be applied during the specialization phase.
These strategies differ in the generalization techniques used for ensuring the
termination of the program specialization phase. In Section 5 we report on some
experiments we have performed by using a prototype implementation of the
MAP transformation system. Finally, we compare the results we have obtained
using the MAP system with the results we have obtained using other verification
systems.

2    Specifying CTL Properties by CLP Programs
We will model an infinite state system as a Kripke structure and we will represent
a property to be verified as a formula of the Computation Tree Logic (CTL, for
short). The fact that a CTL formula ϕ holds in a state s of a Kripke structure K
will be denoted by K,s |= ϕ. (The reader who is not familiar with these notions
may refer to the Appendix or to [9].) A Kripke structure can be encoded as a
constraint logic program as indicated in the following four points [15,25,27].
(1) The set S of states is given as a set of n-tuples of the form ⟨t1 , . . . , tn ⟩, where
for i = 1, . . . , n, the term ti is either a rational number or an element of a finite
domain. For reasons of simplicity, when denoting a state we will feel free to use
a single variable X, instead of an n-tuple of variables of the form ⟨X1 , . . . , Xn ⟩.
(2) The set I of initial states is given as a set of clauses of the form:
    initial (X) ← c(X), where c(X) is a constraint.
(3) The transition relation R is given as a set of clauses of the form:
    t(X, Y ) ← c(X, Y )
where c(X, Y ) is a constraint. Y is called a successor state of X. We also define
a predicate ts such that, for every state X, ts(X, Ys) holds iff Ys is a list of all
the successor states of X, that is, for every state X, the state Y belongs to the
list Ys iff t(X, Y ) holds. We refer to [16] for conditions that guarantee that Ys
is a finite list and for an algorithm to construct the clauses defining ts from the
clauses defining t.
(4) The elementary properties which are associated with each state X by the
labeling function L, are given by a set of clauses of the form:
    elem(X, e) ← c(X)
where e is a constant, that is, the name of an elementary property, and c(X) is
a constraint.
    Given a Kripke structure K encoded by the clauses defining the predicates
initial , t, ts, and elem, the satisfaction relation |= can be encoded by a predicate
sat defined by the following clauses [15,25,27]:
       1. sat(X, F ) ← elem(X, F )
       2. sat(X, not(F )) ← ¬sat(X, F )


     3. sat(X, and (F1 , F2 )) ← sat(X, F1 ), sat(X, F2 )
     4. sat(X, ex (F )) ← t(X, Y ), sat(Y, F )
     5. sat(X, eu(F1 , F2 )) ← sat(X, F2 )
     6. sat(X, eu(F1 , F2 )) ← sat(X, F1 ), t(X, Y ), sat(Y, eu(F1 , F2 ))
     7. sat(X, af (F )) ← sat(X, F )
     8. sat(X, af (F )) ← ts(X, Ys), sat all (Ys, af (F ))
     9. sat all ([ ], F ) ←
    10. sat all ([X|Xs], F ) ← sat(X, F ), sat all (Xs, F )
Let PK denote the constraint logic program consisting of clauses 1–10 together
with the clauses defining the predicates initial , t, ts, and elem.
    Now we will present a method for proving that a given CTL formula ϕ holds.
We start by defining a new predicate prop as follows:
    prop ≡def ∀X(initial (X) → sat(X, ϕ))
This definition can be encoded by the following two clauses:
    γ1 : prop ← ¬negprop
    γ2 : negprop ← initial (X), sat(X, not(ϕ))
The correctness of this encoding is stated by the following Theorem 1 and it is
a consequence of the fact that program PK ∪ {γ1 , γ2 } is locally stratified and,
hence, it has a unique perfect model M (PK ∪ {γ1 , γ2 }) [4]. The proof proceeds
by structural induction on the CTL formula ϕ and is omitted for lack of space
(see [16] for details).
Theorem 1 (Correctness of Encoding). Let K be a Kripke structure, let I
be the set of initial states of K, and let ϕ be a CTL formula. Then,
    (for all states s ∈ I, K, s |= ϕ) iff prop ∈ M (PK ∪ {γ1 , γ2 }).
Example 1. Let us consider the reactive system depicted in Figure 1, where a
term of the form s(X1 , X2 ) denotes the pair ⟨X1 , X2 ⟩ of rationals.
[Figure 1 depicts a single control state s(X1 , X2 ) with two self-loop transitions:
X2 := X2 −1, enabled when X1 ≥ 1, and X2 := X2 +1, enabled when X1 ≤ 2.]
Fig. 1. A reactive system. In any initial state we have that X1 ≤ 0 and X2 = 0. No
transition changes the value of X1 .

The Kripke structure K which models that system, is defined as follows. The
initial states are given by the clause:
    11. initial (s(X1 , X2 )) ← X1 ≤ 0, X2 = 0
The transition relation R is given by the clauses:
    12. t(s(X1 , X2 ), s(Y1 , Y2 )) ← X1 ≥ 1, Y1 = X1 , Y2 = X2 −1
    13. t(s(X1 , X2 ), s(Y1 , Y2 )) ← X1 ≤ 2, Y1 = X1 , Y2 = X2 +1
The elementary property negative is given by the clause:
    14. elem(s(X1 , X2 ), negative) ← X2 < 0
Let PK denote the program consisting of clauses 1–14. We omit the clauses
defining the predicate ts, which are not needed in this example.

    Suppose that we want to verify the following property: in every initial state
s(X1 , X2 ), where X1 ≤ 0 and X2 = 0, the CTL formula not(eu(true, negative))
holds, that is, from any initial state it is impossible to reach a state s(X1′ , X2′ )
where X2′ < 0. By using the fact that not(not(ϕ)) is equivalent to ϕ, this property
is encoded by the following two clauses:
    γ1 : prop ← ¬negprop
    γ2 : negprop ← X1 ≤ 0, X2 = 0, sat(s(X1 , X2 ), eu(true, negative))
Thus, by Theorem 1, in order to verify that in every initial state of K the CTL
formula not(eu(true, negative)) holds, we need to show that prop ∈ M (PK ∪
{γ1 , γ2 }).                                                                □
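The intended behaviour of Example 1 can be observed concretely with a small Python simulation (our own illustration, not part of the verification method). It explores the system of Fig. 1 up to a bounded depth from one sample initial state, which suggests, but of course does not prove, that no state with X2 < 0 is reachable:

```python
from fractions import Fraction

def successors(state):
    """The transition relation of clauses 12-13."""
    x1, x2 = state
    succs = []
    if x1 >= 1:
        succs.append((x1, x2 - 1))    # clause 12: decrement X2
    if x1 <= 2:
        succs.append((x1, x2 + 1))    # clause 13: increment X2
    return succs

def bounded_reach(initial, depth):
    """All states reachable from `initial` in at most `depth` steps."""
    seen, frontier = {initial}, {initial}
    for _ in range(depth):
        frontier = {t for s in frontier for t in successors(s)} - seen
        seen |= frontier
    return seen

# A sample initial state with X1 <= 0, X2 = 0 (clause 11): since X1 is
# never changed and X1 < 1, only the increment transition is enabled,
# so X2 can only grow.
reached = bounded_reach((Fraction(-1), Fraction(0)), 50)
assert all(x2 >= 0 for (_, x2) in reached)
```

Since the set of initial states (and the set of reachable states) is infinite, such a simulation cannot replace the symbolic proof produced by the method of Section 3.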
    One of the most important features of the model checking techniques for
finite state systems is the ability to find witnesses and counterexamples [9]. Our
encoding of the Kripke structure can easily be extended to handle the generation
of witnesses of formulas of the form eu(ϕ1 , ϕ2 ) and counterexamples of formulas
of the form af (ϕ). For details, the interested reader may refer to Section 7 of [16].


3    Verifying Infinite State Systems by Specializing CLP
     Programs
In this section we present a method for checking whether or not prop ∈ M (PK ∪
{γ1 , γ2 }), where PK ∪ {γ1 , γ2 } is a CLP specification of an infinite state system
and prop is a predicate encoding the satisfiability of a given CTL formula, as
indicated in Section 2.
    The checking technique based on the bottom-up construction of the perfect
model M (PK ∪ {γ1 , γ2 }) is not feasible, in general, because this model may
be infinite. Moreover, the proof procedures normally used in constraint logic
programming, such as the extension of SLDNF resolution and tabled resolution
to CLP, very often diverge when trying to check whether or not prop ∈ M (PK ∪
{γ1 , γ2 }). This is due to the limited ability of these proof procedures to cope
with infinite failure.
    It has been shown in [15] that program specialization often improves the
treatment of infinite failure. In this section we present a reformulation of the
method proposed in [15] by defining a verification algorithm working in two
phases: (1) in the first phase we specialize the program PK ∪ {γ1 , γ2 } w.r.t. the
query prop, that is, w.r.t. the initial state of the system and the temporal property
to be verified, and (2) in the second phase we construct the perfect model of the
specialized program by a bottom-up evaluation. This separation into two phases
allows the assessment of the effect of program specialization on the verification
process as indicated in Section 5.
    It is often the case that the specialization phase improves the termination
of the construction of the perfect model. Indeed, in many cases the bottom-
up construction of the perfect model of the program PK ∪ {γ1 , γ2 } does not
terminate because it does not take into account the information about the query
to be evaluated and, in particular, about the initial states of the system. The
specialization phase modifies the initial program PK ∪ {γ1 , γ2 } by incorporating

into the specialized program Ps the knowledge about the constraints that hold in
the initial states. Therefore, the bottom-up construction of the perfect model of
Ps may exploit those constraints and terminate more often than the bottom-up
construction of the perfect model of the initial program PK ∪ {γ1 , γ2 }.

The Verification Algorithm
Input: The program PK ∪ {γ1 , γ2 }.
Output: An Herbrand interpretation Ms such that prop ∈ M (PK ∪ {γ1 , γ2 }) iff
prop ∈ Ms .
(Phase 1) Specialize(PK ∪ {γ1 , γ2 }, Ps );
(Phase 2) BottomUp(Ps , Ms )
    The Specialize procedure of Phase (1) implements a program specialization
strategy which makes use of the following transformation rules only: definition
introduction, positive unfolding, constrained atomic folding, removal of clauses
with unsatisfiable body, and removal of subsumed clauses. Thus, Phase (1) is
simpler than the specialization strategy presented in [15], which uses some extra
rules such as negative unfolding, removal of useless clauses, and contextual
constraint replacement.

Procedure Specialize
Input: The program PK ∪ {γ1 , γ2 }.
Output: A stratified program Ps such that prop ∈ M (PK ∪ {γ1 , γ2 }) iff prop ∈
M (Ps ).
Ps := {γ1 }; InDefs := {γ2 }; Defs := {};
while there exists a clause γ in InDefs
do Unfold (γ, Γ );
     Generalize&Fold(Defs, Γ, NewDefs, Φ);
     Ps := Ps ∪ Φ;
     InDefs := (InDefs − {γ}) ∪ NewDefs; Defs := Defs ∪ NewDefs;
end-while
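The control flow of the Specialize procedure is a standard worklist loop. The Python skeleton below is our own schematic rendering: `unfold` and `generalize_and_fold` are hypothetical callbacks standing for the two subprocedures described next, and the toy instance at the end (definitions modelled as integers, with a generalization that caps them at 3, a caricature of widening) only illustrates why reusing generalized definitions makes the loop terminate:

```python
def specialize(gamma2, unfold, generalize_and_fold):
    """Worklist rendering of the Specialize procedure: ps accumulates the
    folded clauses, indefs holds the definitions still to be processed,
    and defs records every definition introduced so far."""
    ps, indefs, defs = [], [gamma2], []
    while indefs:
        gamma = indefs.pop()                      # pick a clause from InDefs
        body = unfold(gamma)                      # Unfold(gamma, Gamma)
        newdefs, folded = generalize_and_fold(defs, body)
        ps.extend(folded)                         # Ps := Ps ∪ Φ
        indefs.extend(newdefs)                    # InDefs := InDefs ∪ NewDefs
        defs.extend(newdefs)                      # Defs := Defs ∪ NewDefs
    return ps

# Toy instance: a "definition" is an integer; unfolding produces n+1 and
# generalization caps values at 3, so defs can only ever contain {1, 2, 3}
# and the loop stops as soon as a definition is reused for folding.
def gf(defs, m):
    g = min(m, 3)
    return ([], [g]) if g in defs else ([g], [g])

assert specialize(0, lambda n: n + 1, gf) == [1, 2, 3, 3]
```

Without the cap (the generalization), every unfolding would introduce a fresh definition and the loop would diverge, which is exactly the termination problem the generalization strategies of Section 4 address.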

The Unfold procedure takes as input a clause γ ∈ InDefs of the form H ←
c(X), sat(X, ψ), and returns as output a set Γ of clauses derived from γ as
follows. The Unfold procedure first unfolds γ once w.r.t. sat(X, ψ) and then
applies the unfolding rule zero or more times, as long as in the body of a clause
derived from γ there is an atom of one of the following forms: (i) t(s1 , s2 ),
(ii) ts(s, ss), (iii) sat(s, e), where e is an elementary property, (iv) sat(s, not(ψ1 )),
(v) sat(s, and (ψ1 , ψ2 )), (vi) sat(s, ex (ψ1 )), and (vii) sat all (ss, ψ1 ), where ss
is a non-variable list. Then the set of clauses derived from γ by applying the
unfolding rule is simplified by removing: (i) every clause whose body contains
an unsatisfiable constraint, and (ii) every clause which is subsumed by a clause
of the form H ← c, where c is a constraint. Due to the structure of the clauses
defining the predicates t, ts, sat, and sat all , the Unfold procedure terminates
for any ground term denoting a CTL formula ψ occurring in γ.


    The Generalize&Fold procedure takes as input the set Γ of clauses produced
by the Unfold procedure and the set Defs of clauses, called definitions. A defi-
nition in Defs is a clause of the form newp(X) ← d(X), sat(X, ψ) which can be
used for folding. The Generalize&Fold procedure introduces a set NewDefs of
new definitions (which are then added to Defs) and, by folding the clauses in Γ
using the definitions in Defs ∪ NewDefs, derives a new set Φ of clauses, called
specialized clauses, which are added to the program Ps . The specialized clauses
in Φ are derived as follows. Suppose that η: H ← e, L1 , . . . , Ln is a clause in Γ ,
where for i = 1, . . . , n, Li is of the form either sat(X, ψi ) or ¬sat(X, ψi ). For
i = 1, . . . , n, if there exists a definition δi : newp(X) ← di (X), sat(X, ψi ) in Defs
such that e implies di (X), then η is folded using δi , that is, sat(X, ψi ) is replaced
by newp(X); otherwise a new definition νi : newp(X) ← g(X), sat(X, ψi ), such
that e implies g(X), is added to NewDefs and η is folded using νi . More details
on the Generalize&Fold procedure will be given in the next section.
    An uncontrolled application of the Generalize&Fold procedure may lead to
the introduction of infinitely many new definitions and, therefore, it may cause
the non-termination of the Specialize procedure. In order to guarantee
termination, we will extend to constraint logic programs some techniques
which have been proposed for controlling generalization in positive supercompi-
lation [34] and partial deduction [22,24].
    The output program Ps of the Specialize procedure is a stratified program and
the procedure BottomUp computes the perfect model Ms of Ps by considering a
stratum at a time, starting from the lowest stratum and going up to the highest
stratum of Ps (see, for instance, [4]). Obviously, the model Ms may be infinite
and the BottomUp procedure may not terminate. In order to get a terminating
procedure, we could compute an approximation of Ms by applying some standard
techniques which are used in the static analysis of programs [10]. Indeed, in order
to prove that prop ∈ Ms , we could construct a set A ⊆ Ms such that prop ∈ A.
Approximations (also called abstractions) are often used for the verification of
infinite state systems (see, for instance, [2,5,13,35]). In this paper, however, we
will not study these techniques based on approximations and we will focus our
attention on the design of the Specialize procedure only. In Section 5 we show
that the construction of the model Ms terminates in several significant cases.

Example 2. Let us consider the reactive system K of Example 1. We want to
check whether or not prop ∈ M (PK ∪ {γ1 , γ2 }), where prop expresses the fact
that not(eu(true, negative)) holds in every initial state.
    Now we have that: (i) by using a traditional Prolog system, the evaluation of
the query prop does not terminate in the program PK ∪{γ1 , γ2 }, because negprop
has an infinitely failed SLD tree, (ii) by using the XSB logic programming system
which uses tabled resolution, the query prop does not terminate, and (iii) the
bottom-up construction of M (PK ∪ {γ1 , γ2 }) does not terminate (even if we
restrict the program to clauses 1, 2, 5, 6, 11–14, which are the clauses actually
needed for evaluating the query prop).
    In our verification algorithm we overcome those difficulties encountered by
traditional Prolog systems and XSB by applying the Specialize procedure to the


program PK ∪ {γ1 , γ2 } (with a suitable generalization strategy, as illustrated in
the next section). By doing so, we derive the following specialized program Ps :
   prop ← ¬negprop
   negprop ← X1 ≤ 0, X2 = 0, new 1(X1 , X2 )
   new 1(X1 , X2 ) ← X1 ≤ 0, X2 = 0, Y1 = X1 , Y2 = 1, new 2(Y1 , Y2 )
   new 2(X1 , X2 ) ← X1 ≤ 0, X2 ≥ 0, Y1 = X1 , Y2 = X2 + 1, new 2(Y1 , Y2 )
The BottomUp procedure computes the perfect model of Ps , which is Ms =
{prop}, in a finite number of steps. Thus, the property not(eu(true, negative))
holds in every initial state of K.                                          □
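The last step of Phase (2) can be reproduced on a constraint-free abstraction of Ps (our own illustration, not the actual BottomUp procedure, which manipulates constrained facts). We keep only the predicate dependencies of Ps, which happens to be faithful here because every constraint occurring in Ps is satisfiable, and run a naive bottom-up iteration per stratum: since new2 has no non-recursive clause, its least model is empty, hence negprop fails and prop holds in the upper stratum:

```python
def least_model(clauses):
    """Naive bottom-up (immediate-consequence) iteration for
    propositional definite clauses, given as (head, [body]) pairs."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# Lower stratum of Ps (predicate skeleton only):
lower = [('negprop', ['new1']),
         ('new1',    ['new2']),
         ('new2',    ['new2'])]   # new2 has no non-recursive clause
m = least_model(lower)            # empty: new2 can never be derived
# Upper stratum: prop <- ¬negprop (stratified negation)
ms = m | ({'prop'} if 'negprop' not in m else set())
assert ms == {'prop'}
```

This is only a sanity check of the example; in general the bottom-up evaluation must carry the constraints along, since their satisfiability determines which clause instances fire.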

4     Generalization Strategies
The design of a powerful generalization strategy should meet two conflicting
requirements: (i) the strategy should enforce the termination of the Specialize
procedure, and (ii) the strategy should avoid over-generalization, which may
produce a specialized program Ps with an infinite perfect model and, hence,
cause the non-termination of the BottomUp procedure. In this section we
present several generalization strategies for
coping with those conflicting requirements. These strategies combine various by
now standard techniques used in the fields of program transformation and static
analysis, such as well-quasi orderings, widening, and convex hull operators, and
variants thereof. All these strategies guarantee the termination of the Special-
ize procedure. However, as we deal with an undecidable verification problem,
the power and effectiveness of the various generalization strategies can only be
assessed by an experimental evaluation, which will be presented in the next
section.

4.1    The Generalize&Fold Procedure
The Generalize&Fold procedure makes use of a tree, called Definition Tree,
whose root is labelled by clause γ2 (recall that {γ2 } is the initial value of InDefs)
and the non-root nodes are labelled by the clauses in Defs. Since, by construction,
the clauses in Defs are all distinct, we will identify each clause with a node of
that tree. The children of a clause γ in Defs are the clauses in NewDefs derived by
applying Unfold (γ, Γ ) followed by Generalize&Fold(Defs, Γ, NewDefs, Φ).
    Similarly to [22,24,34], our generalization technique is based on the com-
bined use of well-quasi ordering relations and clause generalization operators.
The well-quasi orderings guarantee that generalization is eventually applied,
while generalization operators guarantee that each definition can be generalized
a finite number of times only.
    Let C be the set of all constraints and D be a fixed interpretation for the
constraints in C. We assume that: (i) every constraint in C is either an atomic
constraint or a finite conjunction of constraints (we will denote conjunction by
comma), and (ii) C is closed under projection, where the projection of a con-
straint c w.r.t. the variable X is a constraint, denoted project(c, X), such that
D |= ∀(project(c, X) ↔ ∃Y c), where Y is the tuple of the variables of c distinct
from X. We define a partial order ⊑ on C as follows: for any two constraints c1
and c2 in C, we have that c1 ⊑ c2 iff D |= ∀ (c1 → c2 ).

Definition 1 (Well-Quasi Ordering). A well-quasi ordering (wqo, for short)
on a set S is a reflexive, transitive, binary relation ≼ such that, for every infinite
sequence e0 , e1 , . . . of elements of S, there exist i and j such that i < j and
ei ≼ ej . Given e1 and e2 in S, we write e1 ≈ e2 if e1 ≼ e2 and e2 ≼ e1 . We say
that a wqo ≼ is thin iff for all e ∈ S, the set {e′ ∈ S | e ≈ e′ } is finite.
Definition 2 (Generalization Operator). Let ≼ be a wqo on C. A general-
ization operator on C w.r.t. the wqo ≼ is a binary operator ⊖ such that, for all
constraints c and d in C, we have: (i) d ⊑ c ⊖ d, and (ii) c ⊖ d ≼ c. (Note that,
in general, ⊖ is not commutative.)
    Similarly to the widening operator ∇ used in abstract interpretation [10],
every infinite sequence of constraints constructed by using the generalization
operator ⊖ eventually stabilizes, that is, for every infinite sequence d0 , d1 , . . . of
constraints, in the infinite sequence c0 , c1 , . . . defined as follows:
    c0 = d0    and     for any i ≥ 0, ci+1 = ci ⊖ di+1 ,
there exist 0 ≤ m < n such that cm = cn .

Procedure Generalize&Fold
Input: (i) a set Defs of definitions, and (ii) a set Γ of clauses obtained from a
clause γ by the Unfold procedure.
Output: (i) A set NewDefs of new definitions, and (ii) a set Φ of folded clauses.
NewDefs := ∅ ; Φ := Γ ;
while in Φ there exists a clause η: H ← e, G1 , L, G2 , where L is either sat(X, ψ)
or ¬sat(X, ψ) do
Generalize:
Let ep (X) be project(e, X).
1. if in Defs there exists a clause δ: newp(X) ← d(X), sat(X, ψ) such that
        ep (X) ⊑ d(X) (modulo variable renaming)
   then NewDefs := NewDefs
2. elseif there exists a clause α in Defs such that:
        (i) α is of the form newq(X) ← b(X), sat(X, ψ), and (ii) α is the most
        recent ancestor of γ in the Definition Tree such that b(X) ≼ ep (X)
   then NewDefs := NewDefs ∪ {newp(X) ← b(X) ⊖ ep (X), sat(X, ψ)}
3. else NewDefs := NewDefs ∪ {newp(X) ← ep (X), sat(X, ψ)}
Fold:
Φ := (Φ − {η}) ∪ {H ← e, G1 , M, G2 }, where M is newp(X), if L is sat(X, ψ),
      and M is ¬newp(X), if L is ¬sat(X, ψ)
end-while

   The following theorem, whose proof is given in [16], establishes that the
Specialize procedure, which uses the Unfold and Generalize&Fold subproce-
dures, always terminates and preserves the perfect model semantics.
Theorem 2 (Termination and Correctness of the Specialize Procedure).
For every input program PK ∪ {γ1 , γ2 }, for every thin wqo ≼, and for every
generalization operator ⊖, the Specialize procedure terminates. If Ps is the
output program of the Specialize procedure, then prop ∈ M (PK ∪ {γ1 , γ2 }) iff
prop ∈ M (Ps ).

4.2     Well-Quasi Orderings and Generalization Operators on Linear
        Constraints
In this section we describe the wqo’s and the generalization operators we have
used in our verification experiments.
     We will consider the set Lin k of constraints defined as follows. The atomic
constraints of Lin k are linear inequations constructed by using the predicate
symbols < and ≤, the k distinct variables X1 , . . . , Xk , and integer coefficients.
The constraints in Lin k are interpreted over the rationals in the usual way.
Every constraint c ∈ Lin k is the conjunction a1 , . . . , am of m (≥ 0) distinct
atomic constraints and, for i = 1, . . . , m, (1) ai is of the form either pi ≤ 0 or
pi < 0, and (2) pi is a polynomial of the form q0 + q1 X1 + . . . + qk Xk , where
the qi ’s are integer coefficients. An equation r = s is considered as an
abbreviation for the conjunction of the two inequations r ≤ s and s ≤ r.
     In every infinite state system we will consider, the state is represented as an
n-tuple ⟨t1 , . . . , tn ⟩, where k terms, with k ≤ n, are rationals and the remaining
n−k terms are elements of a finite domain. As illustrated in Section 2, the initial
states and the elementary properties are specified by constraints in Lin k and the
transition relation is specified by constraints in Lin 2k . (The n−k components of
a state which are elements of a finite domain are specified by their values.)
Well-Quasi Orderings. Now we present three wqo’s between polynomials
and between constraints of Lin k that we will use in the verification examples
of the next section.
(1) The wqo HomeoCoeff, denoted by ≼HC , compares the sequences of absolute
values of the integer coefficients occurring in polynomials. The ≼HC ordering is
based on the notion of homeomorphic embedding (see, for instance, [22,24,34])
and takes into account the commutativity and associativity of addition and
conjunction. Given two polynomials with integer coefficients p1 =def q0 +
q1 X1 + . . . + qk Xk and p2 =def r0 + r1 X1 + . . . + rk Xk , we have that
p1 ≼HC p2 iff there exists a permutation ⟨ℓ0 , . . . , ℓk ⟩ of the indexes ⟨0, . . . , k⟩
such that, for i = 0, . . . , k, |qi | ≤ |rℓi |. Given two atomic constraints
a1 =def p1 < 0 and a2 =def p2 < 0, we have that a1 ≼HC a2 iff p1 ≼HC p2 ,
and similarly if a1 =def p1 ≤ 0 and a2 =def p2 ≤ 0. Given two constraints
c1 =def a1 , . . . , am and c2 =def b1 , . . . , bn , we have that c1 ≼HC c2 iff there
exist m distinct indexes ℓ1 , . . . , ℓm , with m ≤ n, such that ai ≼HC bℓi , for
i = 1, . . . , m.
(2) The wqo MaxCoeff, denoted by ≼MC , compares the maximum absolute
values of the coefficients occurring in polynomials. For any atomic constraint ai
of the form p < 0 or p ≤ 0, where p is q0 + q1 X1 + . . . + qk Xk , we have that
maxcoeff (ai ) = max {|q0 |, |q1 |, . . . , |qk |}, and for any two atomic constraints
a1 , a2 , we have that a1 ≼MC a2 iff maxcoeff (a1 ) ≤ maxcoeff (a2 ). Given two
constraints c1 =def a1 , . . . , am and c2 =def b1 , . . . , bn , we have that c1 ≼MC c2
iff, for i = 1, . . . , m, there exists j ∈ {1, . . . , n} such that ai ≼MC bj .
(3) The wqo SumCoeff, denoted by ≼SC , compares the sums of the absolute
values of the coefficients occurring in polynomials. For any atomic constraint ai
of the form p < 0 or p ≤ 0, where p is q0 + q1 X1 + . . . + qk Xk , we define
sumcoeff (ai ) = Σj=0..k |qj |, and for any two atomic constraints a1 , a2 , we define
a1 ≼SC a2 iff sumcoeff (a1 ) ≤ sumcoeff (a2 ). Given two constraints
c1 =def a1 , . . . , am and c2 =def b1 , . . . , bn , we define c1 ≼SC c2 iff, for
i = 1, . . . , m, there exists j ∈ {1, . . . , n} such that ai ≼SC bj .
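These three measures are straightforward to compute. The sketch below (our own illustration) represents an atomic constraint p < 0 or p ≤ 0 over X1, …, Xk simply by its coefficient tuple (q0, q1, …, qk) and a constraint by a list of such tuples; for HomeoCoeff we only show the atomic case, by brute-force search over permutations:

```python
from itertools import permutations

def maxcoeff(a):
    return max(abs(q) for q in a)

def sumcoeff(a):
    return sum(abs(q) for q in a)

def leq_mc(c1, c2):
    """c1 ≼_MC c2: each atom of c1 is matched by some atom of c2."""
    return all(any(maxcoeff(a) <= maxcoeff(b) for b in c2) for a in c1)

def leq_sc(c1, c2):
    """c1 ≼_SC c2, defined analogously with sums."""
    return all(any(sumcoeff(a) <= sumcoeff(b) for b in c2) for a in c1)

def leq_hc_atomic(a1, a2):
    """a1 ≼_HC a2 for atomic constraints: some permutation of the
    indexes embeds the absolute coefficients of a1 into those of a2."""
    return any(all(abs(q) <= abs(a2[l]) for q, l in zip(a1, perm))
               for perm in permutations(range(len(a2))))

# maxcoeff((1,-2)) = 2 <= 3 = maxcoeff((0,3)):
assert leq_mc([(1, -2)], [(0, 3)])
# sumcoeff((1,-2)) = 3 <= 3 = sumcoeff((0,3)):
assert leq_sc([(1, -2)], [(0, 3)])
assert not leq_sc([(0, 5)], [(1, 2)])   # 5 > 3
# HomeoCoeff embeds (1,-2) into (-2,1) by swapping the two positions:
assert leq_hc_atomic((1, -2), (-2, 1))
assert not leq_hc_atomic((3, 0), (1, 2))
```

The permutation search makes the atomic HomeoCoeff test exponential in k; this is only meant to make the definition concrete, not to be efficient.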
Generalization Operators. Now we present some generalization operators
on Lin k which we use in the verification examples of the next section.
(G1) Given any two constraints c and d, the generalization operator Top, de-
noted T , returns true. It can be shown that T is indeed a generalization
operator w.r.t. the wqo's HomeoCoeff, MaxCoeff, and SumCoeff.
(G2) Given any two constraints c =def a1 , . . . , am and d, the generalization op-
erator Widen, denoted W , returns the conjunction ai1 , . . . , air , where {ai1 , . . . ,
air } = {ah | 1 ≤ h ≤ m and d ⊑ ah } (see [10] for a similar operator for static
program analysis). It can be shown that W is indeed a generalization operator
w.r.t. the wqo's HomeoCoeff, MaxCoeff, and SumCoeff.
(G3) Given any two constraints c =def a1 , . . . , am and d =def b1 , . . . , bn , the gen-
eralization operator WidenPlus, denoted WP , returns the conjunction ai1 , . . . ,
air , bj1 , . . . , bjs , where: (i) {ai1 , . . . , air } = {ah | 1 ≤ h ≤ m and d ⊑ ah }, and
(ii) {bj1 , . . . , bjs } = {bk | 1 ≤ k ≤ n and bk ≾ c}. It can be shown that WP
is indeed a generalization operator w.r.t. the wqo's MaxCoeff and SumCoeff.
However, in general, WP is not a generalization operator w.r.t. HomeoCoeff,
because the constraint c WP d may contain more atomic constraints than c and,
thus, it may not be the case that (c WP d) ≾HC c.
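The operators W and WP can be sketched as follows. This is an illustrative sketch of our own: the entailment test (d ⊑ a) and the wqo test are passed in as callbacks, since in the MAP system they are decided by the clpq constraint solver; the single-variable instance at the end, where an atom u stands for X ≤ u, is a toy of our own making.

```python
# Illustrative sketch of Widen (G2) and WidenPlus (G3). Entailment and
# the wqo test are callbacks; in MAP they come from a constraint solver
# over linear arithmetic.

def widen(c, d, entails):
    # keep the atoms of c that are entailed by d
    return [a for a in c if entails(d, a)]

def widen_plus(c, d, entails, precedes):
    # additionally keep the atoms of d that precede c in the wqo
    return widen(c, d, entails) + [b for b in d if precedes([b], c)]

# Toy instance over a single variable X: an atom u stands for X <= u,
# and a constraint is a list of such atoms.
def entails(d, a):
    return min(d) <= a          # X <= min(d) implies X <= a

def precedes(bs, c):
    # a SumCoeff-like test: each bound in bs is below some bound in c
    return all(any(abs(b) <= abs(a) for a in c) for b in bs)

print(widen([5, 10], [7], entails))                 # [10]
print(widen_plus([5, 10], [7], entails, precedes))  # [10, 7]
```

In the toy run, both results are entailed by d (here, X ≤ 7), as a generalization operator requires; WidenPlus retains more atoms of d and is therefore more precise, which matches the experimental comparison of Section 5.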
     Some additional generalization operators can be defined by combining our
three operators T , W , and WP with the convex hull operator, which is often
used in the static analysis of programs [10]. Given two constraints c and d, their
convex hull, denoted by ch(c, d), is the least constraint h (w.r.t. the ⊑ ordering)
such that c ⊑ h and d ⊑ h.
     Given any two constraints c and d, a wqo ≾, and a generalization operator ⊕,
we define the generalization operator ⊕CH as follows: c ⊕CH d =def c ⊕ ch(c, d).
From the definitions of ⊕ and ch one can easily derive that the operator ⊕CH
is indeed a generalization operator for c and d, that is, (i) d ⊑ c ⊕CH d, and
(ii) c ⊕CH d ≾ c.
(G4) Given any two constraints c and d, we define the generalization operator
CHWiden, denoted CHW , as follows: c CHW d =def c W ch(c, d).
(G5) Given any two constraints c and d, we define the generalization operator
CHWidenPlus, denoted CHWP , as follows: c CHWP d =def c WP ch(c, d).
     Note that if we combine the generalization operator Top with the convex hull
operator, we get the Top operator again.
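The convex hull combination is simply a composition, which can be sketched as follows; in practice the convex hull would be computed by a polyhedra or constraint library, so here it is a callback, and the one-variable toy instance (an atom u stands for X ≤ u) is illustrative only.

```python
# Sketch of the convex hull combination: given a generalization
# operator op and a convex hull function ch, it yields the operator
# (c, d) -> op(c, ch(c, d)). Names and the toy instance are ours.

def combine_with_ch(op, ch):
    return lambda c, d: op(c, ch(c, d))

def toy_widen(c, d):
    return [a for a in c if min(d) <= a]   # atoms of c entailed by d

def toy_ch(c, d):
    # least constraint entailed by both X <= min(c) and X <= min(d)
    return [max(min(c), min(d))]

chwiden = combine_with_ch(toy_widen, toy_ch)
print(chwiden([5, 10], [7]))  # [10]
```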

5    Experimental Evaluation
In this section we report the results of the experiments we have performed on
several examples of verification of infinite state reactive systems.
    We have implemented the verification algorithm presented in Section 2 by
using MAP [26], an experimental system for the transformation of constraint
logic programs. The MAP system is implemented in SICStus Prolog 3.12.8 and
uses the clpq library to operate on constraints.
    Now we give a brief description of the experiments we have conducted (see
also Tables 1 and 2).
    We have considered the following mutual exclusion protocols. (i) Bakery [20]:
we have verified safety (that is, mutual exclusion) and liveness (that is, starvation
freedom) in the case of two processes, and safety in the case of three processes;
(ii) MutAst [21]: we have verified safety in the case of two processes; (iii) Pe-
terson [29]: we have considered a counting abstraction [11] of this protocol and
we have verified safety in the case of N processes; and (iv) Ticket [3]: we have
considered the case of two processes and we have verified safety and liveness.
     We have also verified safety properties of the following cache coherence proto-
cols: (v) Berkeley RISC, (vi) DEC Firefly, (vii) IEEE Futurebus+, (viii) Illinois
University, (ix) MESI, (x) MOESI, (xi) Synapse N+1, and (xii) Xerox PARC
Dragon. These protocols are used in shared-memory multiprocessing systems for
guaranteeing data consistency of the local cache associated with every proces-
sor [18]. We have considered parameterized versions of the above protocols, that
is, protocols designed for an arbitrary number of processors. Similarly to [11], we
have applied our verification method to counting abstractions of the protocols.
    We have also verified safety properties of the following systems. (xiii) Bar-
ber [6]: we have considered a parameterized version of this protocol with a single
barber process and an arbitrary number of customer processes; (xiv) Bounded
and Unbounded Buffer : we have considered protocols for two producers and two
consumers which communicate via a bounded and an unbounded buffer, respec-
tively (the encodings of these protocols are taken from [13]); (xv) CSM, which is
a central server model described in [12]; (xvi) Insertion Sort and Selection Sort:
we have considered the problem of checking array bounds of these two sort-
ing algorithms, parameterized w.r.t. the size of the array, as presented in [13];
(xvii) Office Light Control [7], which is a protocol for controlling how office lights
are switched on or off, depending on room occupancy; (xviii) Reset Petri Nets,
which are Petri Nets augmented with reset arcs: we have considered a reacha-
bility problem for a net which is a variant of one presented in [23] (unlike [23],
in the net we have considered there is no bound on the number of tokens that
can reside in a place and, therefore, our net is an infinite state system).
    Table 1 shows the results of running the MAP system on the above examples
by choosing different combinations of a wqo W and a generalization opera-
tor G introduced in Section 4. In the sequel we will denote that combination
by W &G. The combinations MaxCoeff &CHWiden, MaxCoeff &CHWidenPlus,
MaxCoeff &Top, and MaxCoeff &Widen have been omitted because they give re-
sults which are very similar to those obtained by using SumCoeff instead of
MaxCoeff. HomeoCoeff &CHWidenPlus and HomeoCoeff &WidenPlus have been
omitted because, as already mentioned in Section 4, these combinations do not
satisfy the conditions given in Definition 2, and thus, they do not guarantee the
termination of the specialization strategy.
    If we consider precision, that is, the number of properties which have been
proved, the best combination is SumCoeff &WidenPlus (23 properties proved out
of 23) closely followed by MaxCoeff &WidenPlus and SumCoeff &CHWidenPlus
(22 properties proved out of 23). The verification times for SumCoeff &CHWiden-
Plus are definitely worse than the ones for the other two combinations: indeed, on
average, for each proved property, SumCoeff &CHWidenPlus takes 2990 millisec-
onds, while MaxCoeff &WidenPlus and SumCoeff &WidenPlus take 730 millisec-
onds and 820 milliseconds, respectively. This is due to the fact that SumCoeff &
CHWidenPlus introduces more definitions than the other two strategies. (For
lack of space, we do not report average times for the other generalization strate-
gies, nor we present statistics on the number of generated definitions.) Table 1
also shows that by using the Top and the Widen generalization operators the
specialization time is quite good, but the precision of verification is low. In other
words, the Top and Widen operators cause over-generalization. In con-
trast, SumCoeff &WidenPlus and MaxCoeff &WidenPlus ensure a good balance
between time and precision.
    In conclusion, the specialization strategy which uses the generalization strat-
egy SumCoeff &WidenPlus outperforms the others, closely followed by the spe-
cialization strategy which uses MaxCoeff &WidenPlus. In particular, the general-
ization strategies based either on the homeomorphic embedding (that is, Homeo-
Coeff ) or the combination of the widening and convex hull operators (that is,
Widen and CHWiden) are not the best choices in our examples.
    In order to compare our MAP implementation of the verification method
with other constraint-based model checking tools for infinite state systems, we
have run the verification examples described above on the following systems:
(i) ALV [7], which combines BDD-based symbolic manipulation for boolean and
enumerated types, with a solver for linear constraints on integers, (ii) DMC [13],
which computes (approximated) least and greatest models of CLP(R) programs,
and (iii) HyTech [19], a model checker for hybrid systems.
    Table 2 reports the results of our experiments obtained by using various
options available in those verification systems. All experiments with MAP, ALV,
DMC, and HyTech have been conducted on an Intel Core 2 Duo E7300 2.66GHz
under the Linux operating system.
    In terms of precision, MAP with the SumCoeff &WidenPlus option is the best
system (23 properties proved out of 23), followed by DMC with the A option (19
out of 23), ALV with the default option (18 out of 23), and, finally, HyTech with
the Bw option (17 out of 23). Among the above mentioned systems and respective
options, HyTech (Bw) has the best average running time (70 milliseconds per
proved property), followed by MAP and DMC (both 820 milliseconds), and
ALV (8480 milliseconds). This is explained by the fact that HyTech with
the Bw option tries to prove a safety property with a very simple strategy, by
constructing the reachability set backwards from the property to be proved,
while the other systems apply more sophisticated techniques. Note also that the
average verification times are affected by an uneven behavior on some specific
examples. For instance, in the Bounded and Unbounded Buffer examples the

  Generalization G:     CHWiden CHWidenPlus         Top          Widen    WidenPlus
EXAMPLE wqo W:          HC    SC       SC          HC   SC      HC     SC MC SC
Bakery 2 (safety)         20    70           20   30   40        20     60    30    20
                          20    50           20   20   30        20     40    30    20
Bakery 2 (liveness)       60   120           80   80 100         70    130    80    70
                          40    80           60   50   60        50     90    60    50
Bakery 3 (safety)        160   800          180 2420 3010       170    750   180   160
                         150   430          170 730 680         160    380   170   150
MutAst                   230   440          440 2870 2490       220    370    70   140
                         200   390          420 330 220         190    320    70   140
Peterson N                ∞     ∞          1370   ∞    ∞         ∞      ∞    210   230
                          ∞    410         1370   ∞    30        ∞     250   210   230
Ticket (safety)           30    30           20   20   30        20     30    20    40
                          30    20           10   20   20        20     20    10    30
Ticket (liveness)         90   120          120 100 110         100    100   110   110
                          50    70           70   60   60        60     50    60    60
Berkeley RISC             60   60           200   70   30   50  50 30 30
                          60   40           170   50   20   50  30 30 30
DEC Firefly              190 120            340 100    80 180 120 30 20
                         100   60           160   40   20   90  60 30 20
IEEE Futurebus+           ∞ 47260         47260   ∞ 15630   ∞ 4720 100 2460
                          ∞ 290             290   ∞    30   ∞ 230 100 270
Illinois University       50   80            40 140    90   50  70 40 20
                          50   60            40   60   30   50  50 30 10
MESI                     100   50           130 100    70 100   50 30 30
                          80   40           120   50   20   80  40 30 30
MOESI                    980 160            180 930 100 940 160 50 60
                         950   60            80 860    30 910   60 50 50
Synapse N+1               30   10            10   20   20   20  10 10 10
                          20   10            10   20   10   10  10 10 10
Xerox PARC Dragon       1230   80           280 1140   50 1210  70 30 40
                        1180   60           260 1110   20 1160  50 30 40
Barber                 41380 30150         2740    ∞      ∞ 40750 29030 1210 1170
                        3260 3100          2620 900      410 2630 1620 1170 1130
Bounded Buffer         73990 370           6790 71870     20 75330 340 3520 3540
                       73190 170           6780 71850     20 74550 140 2040 2060
Unbounded Buffer          ∞     ∞           410    ∞      ∞     ∞    ∞ 3890 3890
                         310 130            410 140       10 280 100 360 360
CSM                       ∞     ∞          4710    ∞      ∞     ∞    ∞ 6380 6580
                          ∞ 620            4700    30     20    ∞ 440 6300 6300
Insertion Sort            80    80          160 110       80    70   70 90 100
                          80    60          150    30     20    70   50 90 100
Selection Sort            ∞     ∞           200    ∞      ∞     ∞    ∞ ∞ 190
                         380    80          200    40     40 340     70 770 180
Office Light Control      40    50           50    40     30    50   50 50 50
                          30    40           40    30     30    40   40 40 40
Reset Petri Nets          ∞     ∞            ∞     ∞      ∞     ∞    ∞     0    0
                          10    10           10     0     10     0   10    0    0

Table 1. Comparison of various generalization strategies used by the MAP system
in the examples listed in the leftmost column. HC , MC , and SC denote the wqo
HomeoCoeff, MaxCoeff, and SumCoeff, respectively. Times are expressed in millisec-
onds. The precision is 10 milliseconds. ‘0’ means termination in less than 10 millisec-
onds. ‘∞’ means ‘No answer’ within 100 seconds. For each example we have two lines:
the first line shows the total verification time (Phases 1 and 2) and the second line
shows the program specialization time (Phase 1 only).

MAP system has higher verification times with respect to the other systems,
because these examples can be easily verified by backward reachability, thereby
making the MAP specialization phase redundant. Conversely, MAP is
much more efficient than the other systems in the Peterson and CSM examples,
where the specialization phase definitely pays off.

6   Conclusions
In this paper we have proposed some improvements of the method presented
in [15] for verifying infinite state reactive systems. First, we have reformulated
the verification method as a two-phase method: (1) in the first phase a CLP
specification of the reactive system is specialized w.r.t. the initial state and the
temporal property to be verified and (2) in the second phase the perfect model
of the specialized program is constructed in a bottom-up way. For Phase (1)
we have considered various specialization strategies which employ different well-
quasi orderings and generalization operators to guarantee always terminating
transformations. These orderings and generalization operators are either new or
adapted from similar notions, such as, convex hull, widening, and homeomorphic
embedding, defined in the context of static analysis of programs [10] and program
specialization [22,24,34].
    We have applied these specialization strategies to several examples of infinite
state systems taken from the literature and we have compared the results in
terms of efficiency and precision (that is, the number of proved properties).
On the basis of our experimental results we have found some strategies that
outperform the others in terms of a good balance of efficiency and precision. In
particular, the strategies based on the convex hull, widening, and homeomorphic
embedding do not appear to be the best choices in our examples.
    Then, we have applied other model checking tools for infinite state systems
(in particular, ALV [7], DMC [13], and HyTech [19]) to the same set of examples.
The experimental results show that the specialization-based verification system
(with the best performing strategies) is competitive with respect to the other
tools.
    Our approach is closely related to other verification methods for infinite state
systems based on the specialization of (constraint) logic programs [23,25,28].
Unlike the approach proposed in [23,25] we use constraints, which give us very
powerful means of dealing with infinite sets of states. The specialization-based
verification method presented in [28] consists of a single phase, which incorporates
both top-down query-directed specialization and bottom-up answer propagation.
This method is restricted to definite constraint logic programs and makes use of
a generalization technique which combines widening and convex hull computa-
tions for ensuring termination. In [28] only two examples have been presented
(the Bakery protocol and a Petri net) and no verification times are reported.
Thus, it is hard to compare its effectiveness with that of the method we have
presented here. However, as already mentioned, the gener-
alization techniques based on the widening and the convex hull operators do not
turn out to be the best options in our examples.

                            MAP              ALV       DMC    HyTech
  EXAMPLE               SC &WidenPlus default A F L noAbs Abs Fw Bw
 Bakery 2 (safety)                   20      20 30 90 30            10 30     ∞ 20
 Bakery 2 (liveness)                 70      30 30 90 30            60 70     × ×
 Bakery 3 (safety)                  160     580 570 ∞ 600          460 3090   ∞ 360
 MutAst                             140      ⊥ ⊥ 910 ⊥             150 1370   70 130
 Peterson N                         230   71690 ⊥ ∞ ∞               ∞ ∞       70 ∞
 Ticket (safety)                     40      ∞ 80 30 ∞              ∞ 60      ∞ ∞
 Ticket (liveness)                  110      ∞ 230 40 ∞             ∞ 220     × ×

 Berkeley RISC                       30         10   ⊥   20 60      30 30     ∞ 20
 DEC Firefly                         20         10   ⊥   20 80      50 80     ∞ 20
 IEEE Futurebus+                   2460        320   ⊥   ∞ 670    4670 9890   ∞ 380
 Illinois University                 20         10   ⊥   ∞ 140      70 110    ∞ 20
 MESI                                30         10   ⊥   20 60      40 60     ∞ 20
 MOESI                               60         10   ⊥   40 100     50 90     ∞ 10
 Synapse N+1                         10         10   ⊥   10 30       0    0   ∞   0
 Xerox PARC Dragon                   40         20   ⊥   40 340     70 120    ∞ 20

 Barber                            1170     340 ⊥        90 360    140 230 ∞      90
 Bounded Buffer                    3540       0 10       ∞ 20       20 30 ∞       10
 Unbounded Buffer                  3890      10 10       40 40      ∞ ∞ ∞         20
 CSM                               6580   79490 ⊥        ∞ ∞        ∞ ∞ ∞         ∞
 Insertion Sort                     100      40 60       ∞ 70       30 80 ∞       10
 Selection Sort                     190      ∞ 390       ∞ ∞        ∞ ∞ ∞         ∞
 Office Light Control                50      20 20       30 20      10 10 ∞       ∞
 Reset Petri Nets                     0      ∞ ⊥         ∞ 10        0   0 ∞      10

Table 2. Comparison of the MAP system with the verification systems ALV, DMC,
and HyTech. Times are expressed in milliseconds. The precision is 10 milliseconds.
(i) ‘0’ means termination in less than 10 milliseconds. (ii) ‘⊥’ means termination
with the answer: ‘Unable to verify’. (iii) ‘∞’ means ‘No answer’ within 100 seconds.
(iv) ‘×’ means that the test has not been performed (indeed, HyTech has no built-in
for checking liveness). For the MAP system we show the total verification time with the
SumCoeff &WidenPlus option (see the last column of Table 1). For the ALV system
we show the time for four options: default, A (with approximate backward fixpoint
computation), F (with approximate forward fixpoint computation), and L (with com-
putation of loop closures for accelerating reachability). For the DMC system we show
the time for two options: noAbs (without abstraction) and Abs (with abstraction).
For the HyTech system we show the time for two options: Fw (forward reachability)
and Bw (backward reachability).

    Another approach based on program transformation for verifying parameter-
ized (and, hence, infinite state) systems has been presented in [32]. This approach
is based on unfold/fold transformations which are more general than the ones
used in the specialization strategy considered in this paper. However, the strat-
egy for guiding the unfold/fold rules proposed in [32] works in fully automatic
mode in a small set of examples only.
    Finally, we would like to mention that our verification method can be viewed
as complementary with respect to the methods based on static analysis, such
as [5,13]. These methods work by constructing approximations of program mod-
els (which can be least or greatest models, depending on the specific technique)
in a bottom-up way. However, these methods may fail to prove a property simply
because they compute a subset (or a superset) of the program model.
    One further enhancement of our method could be achieved by replacing the
bottom-up, precise computation of the perfect model performed in our Phase
(2) by an approximated computation, in the style of [5,13]. Finding the optimal
combination, in terms of both efficiency and precision, of program specialization
and static analysis techniques for the verification of infinite state systems is an
interesting direction for future research.

References
 1. P. A. Abdulla, K. Cerans, B. Jonsson, and Y.-K. Tsay. General decidability theo-
    rems for infinite-state systems. Proc. LICS’96, pages 313–321. IEEE Press, 1996.
 2. P. A. Abdulla, G. Delzanno, N. Ben Henda, and A. Rezine. Monotonic abstrac-
    tion (On efficient verification of parameterized systems). International Journal of
    Foundations of Computer Science, 20(5):779–801, 2009.
 3. G. R. Andrews. Concurrent Programming: Principles and Practice. Addison-
    Wesley, 1991.
 4. K. R. Apt and R. N. Bol. Logic programming and negation: A survey. Journal of
    Logic Programming, 19, 20:9–71, 1994.
 5. G. Banda and J. P. Gallagher. Constraint-based abstraction of a model checker
    for infinite state systems. Proc. WLP 2009. Potsdam, Germany, 2009.
 6. T. Bultan. BDD vs. constraint-based model checking: An experimental evaluation
    for asynchronous concurrent systems. Proc. TACAS 2000, LNCS 1785, pages 441–
    455. Springer, 2000.
 7. T. Bultan and T. Yavuz-Kahveci. Action Language Verifier. Proc. ASE 2001,
    pages 382–386. IEEE Press, 2001.
 8. W. Chen and D. S. Warren. Tabled evaluation with delaying for general logic
    programs. JACM, 43(1), 1996.
 9. E. M. Clarke, O. Grumberg, and D. Peled. Model Checking. MIT Press, 1999.
10. P. Cousot and N. Halbwachs. Automatic discovery of linear restraints among
    variables of a program. Proc. POPL’78, pages 84–96. ACM Press, 1978.
11. G. Delzanno. Constraint-based verification of parameterized cache coherence pro-
    tocols. Formal Methods in System Design, 23(3):257–301, 2003.
12. G. Delzanno, J. Esparza, and A. Podelski. Constraint-based analysis of broadcast
    protocols. Proc. CSL ’99, LNCS 1683, pages 50–66. Springer, 1999.
13. G. Delzanno and A. Podelski. Constraint-based deductive model checking. Inter-
    national Journal on Software Tools for Technology Transfer, 3(3):250–270, 2001.
14. J. Esparza. Decidability of model checking for infinite-state concurrent systems.
    Acta Informatica, 34(2):85–107, 1997.
15. F. Fioravanti, A. Pettorossi, and M. Proietti. Verifying CTL properties of infinite
    state systems by specializing constraint logic programs. Proc. VCL’01, Tech. Rep.
    DSSE-TR-2001-3, pages 85–96. University of Southampton, UK, 2001.
16. F. Fioravanti, A. Pettorossi, and M. Proietti. Verifying infinite state systems by
    specializing constraint logic programs. Report 657, IASI-CNR, Roma, Italy, 2007.
17. L. Fribourg and H. Olsén. Proving safety properties of infinite state systems by
    compilation into Presburger arithmetic. Proc. CONCUR’97, LNCS 1243, pages
    96–107. Springer, 1997.
18. J. Handy. The Cache Memory Book. Morgan Kaufman, 1998. Second Edition.
19. T. A. Henzinger, P.-H. Ho, and H. Wong-Toi. HYTECH: A model checker for
    hybrid systems. International Journal on Software Tools for Technology Transfer,
    1(1-2):110–122, 1997.
20. L. Lamport. A new solution of Dijkstra’s concurrent programming problem. Com-
    munications of the ACM, 17(8):453–455, 1974.
21. D. Lesens and H. Saïdi. Abstraction of parameterized networks. Electronic Notes
    in Theoretical Computer Science, 9:41, 1997.
22. M. Leuschel. Improving homeomorphic embedding for online termination. Proc.
    LOPSTR’98, LNCS 1559, pages 199–218. Springer, 1999.
23. M. Leuschel and H. Lehmann. Coverability of reset Petri nets and other well-
    structured transition systems by partial deduction. Proc. CL 2000, LNAI 1861,
    pages 101–115. Springer, 2000.
24. M. Leuschel, B. Martens, and D. De Schreye. Controlling generalization and poly-
    variance in partial deduction of normal logic programs. ACM Transactions on
    Programming Languages and Systems, 20(1):208–258, 1998.
25. M. Leuschel and T. Massart. Infinite state model checking by abstract interpre-
    tation and program specialization. Proc. LOPSTR’99, LNCS 1817, pages 63–82.
    Springer, 2000.
26. The MAP Transformation System. www.iasi.cnr.it/~proietti/system.html, 2010.
27. U. Nilsson and J. Lübcke. Constraint logic programming for local and symbolic
    model-checking. Proc. CL 2000, LNAI 1861, pages 384–398. Springer, 2000.
28. J. C. Peralta and J. P. Gallagher. Convex hull abstractions in specialization of clp
    programs. Proc. LOPSTR 2002, LNCS 2664, pages 90–108, 2003.
29. G. L. Peterson. Myths about the mutual exclusion problem. Information Processing
    Letters, 12(3):115–116, 1981.
30. A. Pnueli and E. Shahar. A platform for combining deductive with algorithmic
    verification. Proc. CAV’96, LNCS 1102, pages 184–195. Springer, 1996.
31. Y. S. Ramakrishna, C. R. Ramakrishnan, I. V. Ramakrishnan, S. A. Smolka,
    T. Swift, and D. S. Warren. Efficient model checking using tabled resolution.
    Proc. CAV’97, LNCS 1254, pages 143–154. Springer, 1997.
32. A. Roychoudhury, K. Narayan Kumar, C. R. Ramakrishnan, I. V. Ramakrish-
    nan, and S. A. Smolka. Verification of parameterized systems using logic program
    transformations. Proc. TACAS 2000, LNCS 1785, pages 172–187. Springer, 2000.
33. H. B. Sipma, T. E. Uribe, and Z. Manna. Deductive model checking. Formal
    Methods in System Design, 15:49–74, 1999.
34. M. H. Sørensen and R. Glück. An algorithm of generalization in positive super-
    compilation. Proc. ILPS’95, pages 465–479. MIT Press, 1995.
35. L. D. Zuck and A. Pnueli. Model checking and abstraction to the aid of parameter-
    ized systems (A survey). Computer Languages, Systems & Structures, 30(3-4):139–
    169, 2004.

Appendix

A Kripke structure [9] is a 4-tuple hS, I, R, Li, where: (1) S is a set of states,
(2) I ⊆ S is a set of initial states, (3) R ⊆ S × S is a transition relation from
states to states, and (4) L: S → P(Elem) is a labeling function that assigns to
each state s ∈ S a subset L(s) of Elem, that is, a set of elementary properties
that hold in s. In particular, there exist the elementary properties true, false,
and initial and we have that, for all s ∈ S, (i) true ∈ L(s), (ii) false ∉ L(s), and
(iii) initial ∈ L(s) iff s ∈ I. We assume that R is a total relation, that is, for every
state s ∈ S there exists at least one state s′ ∈ S, called a successor state of s,
such that s R s′ .
     A computation path in a Kripke structure K is an infinite sequence of states
s0 s1 . . . such that si R si+1 for every i ≥ 0.
     The Computation Tree Logic (CTL, for short) [9] is a propositional tempo-
ral logic interpreted over a given Kripke structure. A CTL formula ϕ has the
following syntax:
     ϕ ::= e | not(ϕ) | and (ϕ1 , ϕ2 ) | ex (ϕ) | eu(ϕ1 , ϕ2 ) | af (ϕ)
where e belongs to the set Elem of the elementary properties. Note that the set
{ex , eu, af } of operators is sufficient to express all CTL properties, because the
other CTL operators introduced in [9] can be defined in terms of ex , eu, and af .
In particular, the operator ef is defined as eu(true, ϕ).
     The operators ex , eu, and af have the following semantics. The property
ex (ϕ) holds in a state s if there exists a successor s′ of s such that ϕ holds in
s′ . The property eu(ϕ1 , ϕ2 ) holds in a state s if there exists a computation path
π starting from s such that ϕ1 holds in all states of a finite prefix of π and ϕ2
holds in the state immediately following that prefix. The property af (ϕ) holds
in a state s if on every computation path π starting from s there exists a state
s′ where ϕ holds.
     Formally, the semantics of CTL formulas is given by defining the satisfaction
relation K, s |= ϕ, which tells us when a formula ϕ holds in a state s of the
Kripke structure K.

Definition 3 (Satisfaction Relation for a Kripke Structure). Given a
Kripke structure K = hS, I, R, Li, a state s ∈ S, and a formula ϕ, we inductively
define the relation K, s |= ϕ as follows:
K, s |= e iff e is an elementary property belonging to L(s)
K, s |= not(ϕ) iff it is not the case that K, s |= ϕ
K, s |= and (ϕ1 , ϕ2 ) iff K, s |= ϕ1 and K, s |= ϕ2
K, s |= ex (ϕ) iff there exists a computation path s0 s1 . . . in K such that
                     s = s0 and K, s1 |= ϕ
K, s |= eu(ϕ1 , ϕ2 ) iff there exists a computation path s0 s1 . . . in K such that
                          (i) s = s0 and (ii) for some n ≥ 0 we have that:
                          K, sn |= ϕ2 and K, sj |= ϕ1 for all j ∈ {0, . . . , n−1}
K, s |= af (ϕ) iff for all computation paths s0 s1 . . . in K, if s = s0 then
                     there exists n ≥ 0 such that K, sn |= ϕ.                      □
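For illustration only, the satisfaction relation of Definition 3 can be evaluated directly on a finite Kripke structure by computing least fixpoints for eu and af. This explicit-state sketch is ours and is only meant to make the semantics concrete; the paper's method, in contrast, handles infinite state spaces symbolically via CLP.

```python
# Explicit-state CTL evaluator for a *finite* Kripke structure,
# following Definition 3. Formulas are tuples:
#   ('e', name), ('not', f), ('and', f1, f2),
#   ('ex', f), ('eu', f1, f2), ('af', f).

def sat(states, R, L, phi):
    """Return the set of states of the structure satisfying phi."""
    op = phi[0]
    if op == 'e':
        return {s for s in states if phi[1] in L(s)}
    if op == 'not':
        return states - sat(states, R, L, phi[1])
    if op == 'and':
        return sat(states, R, L, phi[1]) & sat(states, R, L, phi[2])
    succ = lambda s: {t for t in states if (s, t) in R}
    if op == 'ex':
        f = sat(states, R, L, phi[1])
        return {s for s in states if succ(s) & f}
    if op == 'eu':
        f1 = sat(states, R, L, phi[1])
        f2 = sat(states, R, L, phi[2])
        x = set(f2)
        while True:  # least fixpoint: x = f2 ∪ (f1 ∩ pre(x))
            new = x | {s for s in f1 if succ(s) & x}
            if new == x:
                return x
            x = new
    if op == 'af':
        x = set(sat(states, R, L, phi[1]))
        while True:  # least fixpoint: x = f ∪ {s | all successors in x}
            new = x | {s for s in states if succ(s) <= x}
            if new == x:
                return x
            x = new

# A 3-state structure: 0 -> 1 -> 2 -> 2, with 'p' holding in state 2.
S = {0, 1, 2}
R = {(0, 1), (1, 2), (2, 2)}
L = lambda s: {'true', 'p'} if s == 2 else {'true'}
# ef(p), expressed as eu(true, p) as in the text above:
print(sat(S, R, L, ('eu', ('e', 'true'), ('e', 'p'))))  # {0, 1, 2}
print(sat(S, R, L, ('af', ('e', 'p'))))                 # {0, 1, 2}
```

Note that the totality of R is essential for the af fixpoint: a state with no successors would vacuously satisfy the "all successors" test.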