<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Specification Predicates with Explicit Dependency Information</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Richard Bubel</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Reiner Hähnle</string-name>
          <email>reiner@chalmers.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Peter H. Schmitt</string-name>
          <email>pschmitt@ira.uka.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept. of Computer Science and Engg., Chalmers Univ. of Technology</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Dept. of Computer Science</institution>
          ,
          <addr-line>Univ. of Karlsruhe</addr-line>
        </aff>
      </contrib-group>
      <fpage>28</fpage>
      <lpage>43</lpage>
      <abstract>
        <p>Specifications of programs use auxiliary symbols to encapsulate concepts for a variety of reasons: readability, reusability, structuring and, in particular, for writing recursive definitions. The definition of these symbols often depends implicitly on the value of other locations, such as fields, that are not stated explicitly as arguments. These hidden dependencies make the verification process substantially more difficult. In this paper we develop a framework that makes dependency on locations explicit. This allows us to define general simplification rules that avoid unfolding of predicate definitions in many cases. A number of non-trivial case studies show the usefulness of the concept.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        {nonNullArray(a)} j := j + 1; {nonNullArray(a)}
nonNullArray(a) → wp(j := j + 1, nonNullArray(a))
nonNullArray(a) → ⟨j := j + 1;⟩ nonNullArray(a) .
To prove this claim naïvely entails unpacking the definition of nonNullArray
or to define a special-purpose predicate transformer having specific knowledge
about the state dependencies in its definition. For example, the definition of
nonNullArray contains an implicit dependency on a and on a[i] for all i, but not
on any integer program variable j. The contribution of this paper is a technique
that makes this kind of implicit dependency information explicit in the symbol’s
syntax. This leads to significantly higher automation. The presented approach
has been evaluated on several small examples, e.g., the verification of a selection
sort algorithm, as well as on an implementation of the Schorr-Waite algorithm.
It allows us to formulate uniform predicate transformers that can exploit generic
dependencies and that are applicable in a variety of situations. This paper is
partly based on results first published in the first co-author’s dissertation [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
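      <p>To make the hidden-dependency intuition concrete, here is a minimal executable sketch in plain JAVA (not the ODL formalism; the class name NonNullArrayDemo and the test values are ours): the predicate nonNullArray depends only on the array reference and its components, so an assignment to an unrelated integer variable j cannot change its truth value.</p>

```java
// Hypothetical sketch: nonNullArray(a) holds iff every component of a is non-null.
// Its value depends on a and the locations a[i], but not on other program variables.
public class NonNullArrayDemo {
    static boolean nonNullArray(Object[] a) {
        for (Object o : a) {
            if (o == null) return false; // a single null component falsifies the predicate
        }
        return true;
    }

    public static void main(String[] args) {
        Object[] a = { "x", "y" };
        int j = 0;
        boolean before = nonNullArray(a);
        j = j + 1;                          // j := j + 1 touches no location of a
        boolean after = nonNullArray(a);
        System.out.println(before == after); // the predicate's value is unaffected
    }
}
```

      <p>This is exactly the situation the simplification rules of Sect. 5 exploit symbolically.</p>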
      <p>The paper is structured as follows. The program logic used in this paper is
introduced in Sect. 2. Syntax and semantics for the explicit dependency notation
called location descriptors are presented in Sect. 3. In Sect. 4 we show how
to prove that the chosen location descriptor is consistent with the predicate’s
definition. Predicate transformers in the form of simplification rules that take
advantage of explicit dependencies are presented in Sect. 5. Sect. 6 presents
applications and case studies that show the usefulness of the concept of location
descriptors. Sect. 7 credits related work and outlines future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Dynamic Logic with Updates</title>
      <p>
        This section sketches the program logic used throughout the paper. We use
object-oriented dynamic logic (ODL) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] which extends standard dynamic logic
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] to cover all essential features of object-orientation, but is small enough for
theoretical purposes.
      </p>
      <p>
        The programming language used in ODL is essentially a stripped-down
version of JAVA. It supports all concepts except for inner and anonymous classes,
floating point arithmetic, and threads (and, hence, GUI). Additionally, ODL does
not support dynamic method binding or exceptions. Some of these restrictions
stem from open scientific problems (floating points, threads), others are a
consequence of the goal to define a minimalistic object-oriented language into which
all aspects of realistic languages can be compiled without much overhead. We
concentrate on the language aspects essential for the paper and leave out, for
example, object allocation or exceptions. For a full account see [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. When
convenient we use the style of presentation given in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] for the logic JAVA CARD DL.
Definition 1 (Signature). Let Tall := {⊤, boolean, int, ⊥} ∪ Td denote a
finite set of types. (Tall, ⊑) forms a complete lattice with respect to the
partial order ⊑ (modelling the subtype hierarchy). Td is the set of all reference
types (closed by intersection) containing the Null type as least element.
Further, T := Tall − {⊥} denotes the set of all types except the bottom type.
ΣT := {Q, Op, Mod, U, Π, PSym, FSym, VSym} is a typed signature over T ,
where Q, Op are sets containing the classical first-order quantifiers and
operators. Mod contains the box [·] and diamond ⟨·⟩ modalities. U is the set of
elementary updates (defined below) and Π the set of programs. PSym, FSym are
typed predicate and function symbols with arity function α assigning each symbol
its signature. VSym is a set of typed first-order logic variables.
      </p>
      <p>As mentioned in Sect. 1 we distinguish between rigid and state-dependent
(non-rigid) function and predicate symbols with the following notation: PSym =
PSymr ∪ PSymnr and FSym = FSymr ∪ FSymnr. It is useful to single out from
the non-rigid function symbols FSymnr the subset of location function symbols,
i.e., functions used to represent local program variables, fields and arrays that
can be changed by a program or update:
Definition 2 (Location Function Symbols). The set of location function
symbols FSymloc ⊆ FSymnr contains for each arity an infinite number of symbols
including
– for each program variable pv used in an ODL program a constant symbol with
the same name and type;
– for each attribute a of type T declared in a class C of the ODL program, a
unary location function symbol a@(C) : C → T ;
– the array access operator [] : ⊤ × int → ⊤.</p>
      <p>Note 1. ODL features separate syntactic categories for program and logic
(first-order) variables. Program variables are modelled as non-rigid constants and
cannot be quantified. Logic variables are rigid, quantifiable, and must not occur
inside programs. We omit the @(C) suffix from attribute names if no ambiguity
arises and write o.a instead of a(o) for attribute lookup expressions.
Definition 3 (Updates). Let f be a location function symbol and o1, . . . , on, v
terms. Then the expression f (o1, . . . , on) := v denotes an elementary update. General
updates are defined inductively. Let U , U1, U2 be updates, then all of the following
expressions are updates as well:
– the sequential composition U1 ; U2,
– the parallel composition U1 || U2,
– the conditional \if (φ); U (where φ is a formula), and
– the quantification \for x; U (binding the first-order variable x in update U ).
Definition 4 (Terms and Formulae). The inductive definition of terms and
formulae is as usual for typed first-order dynamic logic. We define only the less
common cases:
– Let U be an update and ξ a term (formula), then {U } ξ is a term (formula).
– Let φ be a formula and p a program, then [p]φ (partial correctness) and hpiφ
(total correctness) are formulae.</p>
      <p>The following definitions formalise the semantics of formulae and updates.
Definition 5 (ODL Kripke Structure). An ODL Kripke structure K :=
(M, S, ρ) consists of
– A partial first-order model M = (D, I0) providing a domain mapping D that
assigns to each type its domain (with D(int) = Z) and a partial interpretation
I0 for all rigid and non-rigid predicate symbols:</p>
      <p>I0(q) := ↑ (undefined) if q ∈ PSymnr, and I0(q) := G for some
G ⊆ D(T1) × . . . × D(Tn) if q ∈ PSymr,</p>
      <p>where q : T1 × . . . × Tn is a predicate symbol (analogous for function symbols)
– A set of states S where each S ∈ S contains an interpretation Inr completing
the partial interpretation I0 to a total interpretation I := I0 ∪˙ Inr by assigning
a meaning to all non-rigid symbols.
– The state transition relation ρ defining the programs’ semantics, where for a
program p and two states S1, S2 ∈ S the relation ρ(p)(S1, S2) holds if and only
if executing p in state S1 terminates in the final state S2. As ODL programs
are deterministic the final state is unique whenever it exists.</p>
      <p>A function β : VSym → ⋃T ∈T D(T ) assigning to all logic variables an
element of the universe of the appropriate type is a variable assignment.</p>
      <sec id="sec-2-1">
        <title>Definition 6 (Semantic Location). A semantic location is defined as a tuple</title>
        <p>⟨f, (e1, . . . , en)⟩ where f : T1 × . . . × Tn → T is a location function symbol as in
Def. 2 and ei (for i ∈ {1, . . . , n}) are elements of D(Ti).</p>
      </sec>
      <sec id="sec-2-2">
        <title>Definition 7 ((Consistent) Semantic Update). An elementary semantic</title>
        <p>update is a pair (hf, (e1, . . . , en)i, d) where hf, (e1, . . . , en)i is a semantic location
with f : T1 × . . . × Tn → T and d an element of D(T ). A (possible empty) set
of elementary semantic updates is called semantic update. A semantic update is
called consistent, if it contains for any semantic location at most one elementary
semantic update.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Definition 8 (Application of a Consistent Semantic Update).</title>
        <p>The application of a consistent semantic update CU is a mapping between states. Applying
CU in a state S maps it to the state S′ = CU (S) that coincides on the values of
all location functions with S except for the semantic locations occurring in CU :
whenever (⟨f, (e1, . . . , en)⟩, d) ∈ CU then S′(f )(e1, . . . , en) evaluates to d.</p>
        <p>
          Updates are an explicit notation to capture symbolic state changes. The
definition of a semantics for updates is rather technical due to clashes when
an update assigns different values to the same location. Sequential composition
describes two successive state changes, while parallel composition updates the
locations simultaneously and may cause clashes by updating the same location
with differing values, e.g., l := t1 || l := t2. We use a last-one-wins clash
resolution, i.e., in the resulting state location l has the value t2. Quantified updates
allow us to represent state changes of infinitely many locations. Resolution of
clashes caused by quantified updates is left out for space reasons (see [
          <xref ref-type="bibr" rid="ref18 ref3">3, 18</xref>
          ]).
        </p>
        <sec id="sec-2-3-1">
          <title>Example 1. Some updates and their intended semantics:</title>
          <p>– The elementary update x := t assigns to the program variable x the value t.</p>
          <p>Applying it to a program variable y in {x := t} y results in t if x and y are
the same variable, otherwise, the term evaluates to y.
– The parallel update x := y || y := x swaps the content of the program
variables x and y. Parallel updates are evaluated simultaneously and, therefore,
are independent of each other. More formally, let K := (M, S, ρ) be an ODL
Kripke structure, S ∈ S and β a variable assignment. Evaluating the above
update under (K, S, β) results in the consistent update CU mapping S to
a state CU (S) coinciding with S except for the program variables x and y
whose values in (K, S, β) are swapped.
– The quantified update \for i; a[i] := 0 assigns 0 to all components of a.</p>
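          <p>The parallel-update semantics above can be mimicked in plain JAVA (a hypothetical sketch with states as maps from variable names to values; the class UpdateDemo and its encoding of elementary updates are ours): all right-hand sides are read in the unchanged pre-state, and clashes are resolved last-one-wins.</p>

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of evaluating a parallel update over a simple state.
public class UpdateDemo {
    // Each update is {lhs, rhsVariable}: lhs := value-of-rhsVariable.
    // All right-hand sides are read from the PRE-state (simultaneous evaluation);
    // later entries overwrite earlier ones (last-one-wins clash resolution).
    static Map<String, Integer> applyParallel(Map<String, Integer> state, String[][] updates) {
        Map<String, Integer> post = new LinkedHashMap<>(state); // unchanged locations keep their value
        for (String[] u : updates) {
            post.put(u[0], state.get(u[1])); // rhs evaluated in the pre-state
        }
        return post;
    }

    public static void main(String[] args) {
        Map<String, Integer> s = new LinkedHashMap<>();
        s.put("x", 1);
        s.put("y", 2);
        // x := y || y := x swaps the two variables, as in the example above
        System.out.println(applyParallel(s, new String[][]{{"x", "y"}, {"y", "x"}}));
    }
}
```

          <p>With this reading, l := t1 || l := t2 leaves l with the value of t2, matching the last-one-wins resolution described above.</p>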
        </sec>
        <sec id="sec-2-3-2">
          <title>We continue by defining the semantics of terms and formulae:</title>
        </sec>
      </sec>
      <sec id="sec-2-4">
        <title>Definition 9 (Semantics of Terms and Formulae). The inductive semantic</title>
        <p>
          definitions are as usual. We list only a few non-obvious cases, full definitions
are in [
          <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
          ].
        </p>
        <p>Let K := (M, S, ρ) denote an ODL Kripke structure, t(K,S,β) the evaluation
of term t in state S under a variable assignment β, and |= the validity relation.
– ({U } t)(K,S,β) := t(K,S′,β) where S′ = U (K,S,β)(S)
– S, β |= ⟨p⟩φ (p ∈ Π) iff a state S′ exists with S′, β |= φ and ρ(p)(S, S′)</p>
        <p>
          – S, β |= [p]φ (p ∈ Π) iff S′, β |= φ holds for any state S′ with ρ(p)(S, S′)
– S, β |= {U } φ iff S′, β |= φ, where S′ = U (K,S,β)(S)
Later in the paper we need means to relate two arbitrary states on the
syntactic level. For this we use a special kind of update called anonymous program
update whose purpose is to perform a state transition to an unspecified state.
Anonymous programs of this kind are well-known from propositional dynamic
logic [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. They are deterministic and terminating.
        </p>
      </sec>
      <sec id="sec-2-5">
        <title>Definition 10 (Anonymous Program, Anonymous Program Update).</title>
        <p>The atomic programs st1, st2 . . . are called (elementary) anonymous programs.
The set of programs Π is extended accordingly. Further, we extend the inductive
definition of updates by including the elementary anonymous program update
ωi for each anonymous program sti.</p>
      </sec>
      <sec id="sec-2-6">
        <title>Definition 11 (Semantics of Anonymous Program Updates).</title>
        <p>An anonymous program sti is interpreted in an ODL Kripke structure K such that for all
S ∈ S there exists exactly one state S′ ∈ S such that ρ(sti)(S, S′) holds. An
anonymous program update ωi is then evaluated to a state transformer such
that ωi(K,S,β)(S) = S′ for all variable assignments β.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Symbols with Explicit Dependencies</title>
      <p>In this section we introduce a syntactic notation for non-rigid symbols that
renders explicit the implicit dependencies on the state in their definition.</p>
      <sec id="sec-3-1">
        <title>Location Descriptors</title>
        <p>
          We need a notation to describe sets of locations. We use location descriptors
introduced in the KeY-system [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] for modifies/assignable clauses. The origin of
this notation goes back to quantified updates [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], see also Sect. 2. Location
descriptors permit a compact and extensional characterisation of location sets.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>Definition 12 (Location Descriptor). A location descriptor has the form</title>
        <p>\for x1, . . . , xn; \if (Φ) loc
where (i) x1, . . . , xn are variables bound in Φ and loc, (ii) Φ is an arbitrary
formula, and (iii) loc is a term with a location function symbol as top level
operator, i.e., a program variable, an attribute function or the array access function.
Except for x1, . . . , xn no other free variables occur in Φ or loc.</p>
        <p>Location descriptors ld1, ld2 can be accumulated to sets of location descriptors
by concatenation: ldnew := ld1 ; ld2. In case no variables are bound or Φ is
identical to true, the corresponding parts in the syntax can be omitted.
Example 2. Here are some typical usages of location descriptors:
– \for List x; x.next capturing all next locations of List-typed elements.</p>
        <p>Note that the guard has been omitted here.
– \for T [] a, int i; \if (i &gt;= 0 ∧ i &lt; a.length) a[i] meaning all T -typed array
component locations with indexes between 0 and the array length.3
– \for Tree t; t.left ; \for Tree t; t.right capturing all left and right
locations of Tree-typed elements.</p>
        <p>Definition 13 (Location Descriptor Extension). Let K := (M, S, ρ) be
an ODL Kripke structure with universe D = ST ∈T D(T ). The extension of a
location descriptor ld = \for x1, . . . , xn; \if (Φ) l(t1, . . . , tn) in a given state
S ∈ S is defined as the set of semantic locations:</p>
        <p>ld(K,S) := {⟨l, (t1, . . . , tn)(K,S,β)⟩ | (K, S, β) |= Φ, β : VSym → D}
Concatenation of location descriptors is evaluated as the union of their extensions.
3 The attribute length returns the length for each array and is unspecified otherwise.
Note 2. This definition works for any kind of location descriptor. In particular,
the guard formula Φ and possible subterms of the location term can be arbitrary
ODL formulae and terms that may contain non-rigid symbols or even programs.
</p>
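        <p>To make the notion of extension concrete, the following hypothetical JAVA sketch (class name and string rendering of semantic locations are ours) enumerates the extension of the descriptor \for int i; \if (i &gt;= 0 ∧ i &lt; a.length) a[i] in a state where the array bound to a has a fixed length:</p>

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch: the extension of a location descriptor in a fixed state
// is a set of semantic locations; here each is rendered as "<[], (a, i)>".
public class ExtensionDemo {
    static Set<String> arrayComponentLocations(String arrayName, int length) {
        Set<String> ext = new LinkedHashSet<>();
        for (int i = 0; i < length; i++) { // the guard bounds the quantified variable i
            ext.add("<[], (" + arrayName + ", " + i + ")>");
        }
        return ext;
    }

    public static void main(String[] args) {
        System.out.println(arrayComponentLocations("a", 3));
    }
}
```

        <p>Note that, as the definition states, the extension is state-dependent: a different array length yields a different set of semantic locations.</p>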
      </sec>
      <sec id="sec-3-3">
        <title>Syntax and Semantics of Symbols with Explicit Dependencies</title>
        <p>The notation introduced above provides a concise way to characterise sets of
locations. Now we extend the names of non-rigid (predicate and function) symbols
by qualifications in the form of location descriptors. The idea is that the value
of thus qualified symbols depends at most on the values of their arguments plus
those locations contained in the extension of their location descriptor.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Definition 14 (Symbols with Explicit Dependencies). Let ld denote a</title>
        <p>semicolon-separated list of location descriptors and let p : T1 × · · · × Tn be a
non-rigid predicate. The non-rigid predicate symbol p[ld] : T1 × · · · × Tn is called
predicate with explicit dependencies. Analogously for functions.</p>
        <p>The definitions of terms and formulae remain unchanged. The only difference
is that the signature contains the above defined location-dependent symbols.
There are only few restrictions on the interpretation of ordinary non-rigid
function and predicate symbols: essentially, their interpretation has to be well-defined
and respect their types. The situation is different for symbols with explicit
dependencies. The interpretation of a symbol p[ld] with explicit dependencies has
to coincide on all states S1, S2 that share the same values for the locations
described by ld. This is precisely formulated in the next two definitions.
Definition 15 (ld-Equivalence). For any Kripke structure K we define for
any location descriptor ld the equivalence relation ≈ld on states where S1 ≈ld S2
iff the following two conditions hold:
1. ld(K,S1) = ld(K,S2) (identical location descriptor extension) and
2. f (K,S1)(d1, . . . , dn) = f (K,S2)(d1, . . . , dn) for any ⟨f, (d1, . . . , dn)⟩ ∈ ld(K,S1).
The additional restriction required for Kripke structures to accommodate symbols
with explicit dependencies is now covered by the following definition:</p>
      </sec>
      <sec id="sec-3-5">
        <title>Definition 16 (Dependency-Consistent Kripke Structure). We call an</title>
        <p>ODL Kripke structure K := (M, S, ρ) dependency-consistent if for any predicate
k[ld] (function f [ld]) depending on ld and any two states S1, S2 with S1 ≈ld S2
the evaluation of predicate k[ld] (function f [ld]) in S1 and S2 coincides.
Lemma 1. Let K be a dependency-consistent Kripke structure. Then in any
two states S1, S2 ∈ S with ti(K,S1) = ti(K,S2) (i ∈ {1 . . . n}) the atomic formula
p[ld](t1, . . . , tn) evaluates to the same truth value whenever S1 ≈ld S2 holds.
Analogously for location-dependent function symbols.
Note 3. The notions of satisfiability, validity and model carry over to
dependency-consistent Kripke structures in a straightforward manner. From now on we
deal only with dependency-consistent Kripke structures and the corresponding
notions of model, validity, etc., if not stated otherwise.</p>
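        <p>Operationally, Def. 15 and Def. 16 say: if two states agree on the value of every location in the descriptor's extension, a dependency-consistent interpretation must evaluate the symbol identically in both. A minimal JAVA sketch of the value-agreement check (condition 2 of Def. 15; states as maps from rendered semantic locations to values, all names ours):</p>

```java
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Hypothetical sketch of ld-equivalence, restricted to value agreement:
// two states are equivalent w.r.t. a fixed extension if they assign the
// same value to every semantic location in that extension.
public class LdEquivalenceDemo {
    static boolean ldEquivalent(Map<String, Object> s1, Map<String, Object> s2, Set<String> extension) {
        for (String loc : extension) {
            if (!Objects.equals(s1.get(loc), s2.get(loc))) return false; // values must coincide
        }
        return true; // the states may differ arbitrarily outside the extension
    }
}
```

        <p>Two states differing only on a variable j outside the extension are ld-equivalent, so a symbol depending only on that extension must take the same value in both.</p>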
        <p>Example 3. Instead of modelling nonNullArray as a non-rigid predicate as done
on p. 28, it can now be modelled as a predicate symbol with explicit
dependencies: nonNullArray[\for T [] a, int i; a[i]] : T []. This expresses explicitly that its
value depends only on the value of the components of T []-typed arrays and, of
course, on its argument.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Correct Definitional Extensions</title>
      <p>Definitional extension in first-order logic preserves consistency, i.e., adding a new
axiom of the form ∀x(p(x) ↔ φ) for a new predicate symbol p not occurring in φ
will not introduce new inconsistencies. For predicates with explicit dependencies
an axiom of the form ∀x : Tp[ld]; (p[ld](x) ↔ φ) may already be inconsistent
with respect to the dependency semantics of Def. 16: one has to ensure that
the implicit state dependencies of φ are reflected in the location descriptor ld.
The problem only concerns the implication ∀x : Tp[ld]; (p[ld](x) → φ). We will
consider only implicational axioms of this type and, to simplify the presentation,
we assume there is only one axiom for each defined predicate. These axioms will
be exploited during proof search as rewrite rules p[ld](t1, . . . , tn) ⇝ φ(t1, . . . , tn)
named axiomp[ld]→φ, where the ti’s are terms of the appropriate type. In this
setting we allow the defined symbol to occur recursively on the right-hand side.</p>
      <p>We intend to define a proof obligation that, when valid, ensures a given
implicational axiomatisation of a location-dependent symbol to be consistent
with respect to dependency semantics. The problem of termination in the case
of recursive rewrite rules is a different matter and has to be dealt with separately.</p>
      <p>We need a new concept called anonymising update for location descriptors:
this is a quantified update for a given location descriptor that assigns all locations
in its extension a fixed, but unknown value:</p>
      <sec id="sec-4-1">
        <title>Definition 17 (Anonymising Update for Location Descriptors). Let</title>
        <p>ld := \for T o; \if (φ) f (t) be a location descriptor. The quantified update</p>
        <p>Vld := \for T o; \if (φ); f (t) := c(t)
with c being a fresh uninterpreted rigid function symbol of matching type and
arity is called an anonymising update for the location descriptor ld. For ld =
ld1; . . . ; ldn we define Vld = Vld1 || . . . || Vldn.</p>
        <p>With the help of anonymising updates it is possible to formulate dependency
consistency (Def. 16) directly as the proof obligation
pop[ld]→φ := ∀x : Tp[ld]; (({ωc} {Vld} φ(x)) ↔ {Vld} φ(x))   (1)
where ωc is a fresh anonymous program update (Def. 10).</p>
        <p>Theorem 1. If (1) is logically valid, then ∀x : Tp[ld]; (p[ld](x) → φ) is
consistent with respect to dependency semantics.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Simplification Rules</title>
      <p>
        In this section we introduce two update simplification rules that can be applied
to arbitrary atomic formulas with explicit dependencies. We make use of the
following Lemma [3, Sect. 3.9], [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]:
Lemma 2. Two updates U1, U2 are called equivalent if they induce the same
state transformer in any Kripke structure. Then every update U that has no
anonymous program update as a component is equivalent to an update of the form
upPart1 || . . . || upPartlen with upParti := \for Ti xi; \if (τi); gi(ui) := vali .
      </p>
      <p>Crucial for the first simplification rule is the notion of relevant location
symbol. Relevant location symbols are a syntactic approximation of the location
symbols that a formula φ or a term t may depend on. The notation is Loc(φ)
and Loc(t). In the definition below, and later, we use the following notation:
– h[ld](s1, . . . , sn) stands for a function or predicate symbol with explicit
dependencies, ld = ld1; . . . ; ldq;
– ldi := \for Ti1 oi1; . . . ; Tiri oiri; \if (φi) fi(ti1, . . . , tiαi)
For ease of presentation we assume that fi ≠ fj for i ≠ j. This affects the
formula ψU1,U2,ld in Def. 20. It is not hard to see how to adapt this formula to
the unrestricted case.</p>
      <sec id="sec-5-1">
        <title>Definition 18 (Relevant Location Symbols).</title>
        <p>Loc(¬ψ) := Loc(ψ)
Loc(Q T x; ψ) := Loc(ψ), for Q ∈ {∀, ∃}
Loc(ψ ◦ φ) := Loc(ψ) ∪ Loc(φ), for ◦ ∈ {∧, ∨, →, . . .}
Loc(h(s1, . . . , sn)) := Loc(s1) ∪ . . . ∪ Loc(sn), if h is a rigid symbol
Loc(⟨prg⟩ψ) := FSymloc
Loc([prg]ψ) := FSymloc
Loc(g(s1, . . . , sn)) := {g} ∪ Loc(s1) ∪ . . . ∪ Loc(sn), if g is a location fct. symbol (see Def. 2)
Loc(ld) := Loc(f1(t11, . . . , t1α1)) ∪ Loc(φ1) ∪ . . . ∪ Loc(fq(tq1, . . . , tqαq)) ∪ Loc(φq), with ld as stipulated above
Loc(h[ld](s1, . . . , sn)) := Loc(ld) ∪ Loc(s1) ∪ . . . ∪ Loc(sn)
Loc(h(s1, . . . , sn)) := FSymloc, if h is any other non-rigid symbol
Loc({U } ψ) := Loc(U ) ∪ Loc(ψ)</p>
        <p>Lemma 3. Let φ be an arbitrary formula, K a Kripke structure, S a state in
K, and U as in Lemma 2 with gi ∉ Loc(φ) for all i. Then</p>
        <p>(K, S) |= φ iff (K, S) |= {U }φ</p>
        <sec id="sec-5-1-1">
          <title>This lemma guarantees the correctness of the following rule.</title>
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>Definition 19 (Coincidence Simplification Rule).</title>
        <p>{U } k[ld](s1, . . . , sn) ⇝ {U ′} k[ld](s1, . . . , sn)
where U ′ := U − {upParti | gi ∉ Loc(k[ld](s1, . . . , sn))} and U is as in Lemma 2.
Example 4. Here is an instance of the coincidence simplification rule, where s is a
term with j ∉ Loc(s): ({i := iVal || j := jVal} p[i](s)) ⇝ ({i := iVal} p[i](s))
Example 5. The coincidence simplification rule allows us to prove the motivating
example on p. 28 easily: in nonNullArray(a) → {j := j + 1} nonNullArray(a)
we use nonNullArray[\for T [] a, int i; a[i]] as introduced in Example 3. For
j ∉ Loc(a) an instance of this rule simplifies
{j := j + 1} nonNullArray[. . .](a) ⇝ nonNullArray[. . .](a)
rendering the implication trivially valid.</p>
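        <p>The effect of the coincidence simplification rule can be sketched as a purely syntactic filter over update parts (a hypothetical JAVA rendering; the class CoincidenceDemo and the string encoding "g := v" of update parts are ours):</p>

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: drop every update part "g := v" whose target
// symbol g does not occur among the relevant location symbols Loc.
public class CoincidenceDemo {
    static List<String> simplify(List<String> updateParts, Set<String> loc) {
        List<String> kept = new ArrayList<>();
        for (String part : updateParts) {
            String target = part.split(" := ")[0];
            if (loc.contains(target)) kept.add(part); // only these parts can influence the formula
        }
        return kept;
    }

    public static void main(String[] args) {
        // Mirrors Example 4: {i := iVal || j := jVal} p[i](s) simplifies to {i := iVal} p[i](s)
        System.out.println(simplify(List.of("i := iVal", "j := jVal"), Set.of("i")));
    }
}
```

        <p>Because Loc is a syntactic over-approximation, this filter is sound but, as noted below, only approximate.</p>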
        <p>The coincidence simplification rule is of an approximate nature but is
sufficient for many practical purposes. We provide a stronger, semantic
simplification rule in the form of the equivalence simplification rule. We require
sufficient and necessary conditions for the logical equivalence of the two formulae
{U1} k[ld](s1, . . . , sn) and {U2} k[ld](s1, . . . , sn). First we want the argument
terms of k[ld] to evaluate to the same values after the respective updates, i.e.,
{U1}sl ≐ {U2}sl for all 1 ≤ l ≤ n. Next we want to formalise that the locations
described by each ldj after update U1 are the same as those described by ldj
after update U2 and that their values coincide:
ψj1 := ∀d; ∀cj; ∀oj;</p>
        <p>({U1} (φj ∧ cj ≐ tj ∧ d ≐ fj(tj))) → ∃oj; ({U2} (φj ∧ cj ≐ tj ∧ d ≐ fj(tj)))
ψj2 := ∀d; ∀cj; ∀oj;</p>
        <p>({U2} (φj ∧ cj ≐ tj ∧ d ≐ fj(tj))) → ∃oj; ({U1} (φj ∧ cj ≐ tj ∧ d ≐ fj(tj)))
It is now not hard to see that ψU1,U2,ld := ⋀j=1..q (ψj1 ∧ ψj2) ∧ ⋀l=1..n ({U1}sl ≐ {U2}sl) is
valid if and only if {U1} k[ld](s1, . . . , sn) ↔ {U2} k[ld](s1, . . . , sn) is valid. This
justifies the following rule:</p>
      </sec>
      <sec id="sec-5-3">
        <title>Definition 20 (Equivalence Simplification Rule).</title>
        <p>{U1} k[ld](s1, . . . , sn) ⇝ {U2} k[ld](s1, . . . , sn), provided formula ψU1,U2,ld holds.</p>
        <p>Example 6. We modify Ex. 5 by using the more specialised 0-ary predicate
nonNullArraya[\for int i; a[i]] restricting the non-null element property to only
those arrays referenced by the program variable a. Let</p>
        <p>a ≠ b → (nonNullArraya[. . .] → {b[0] := null} nonNullArraya[. . .])
be the formula to be proven valid, where b is an array of the same type as a. Note that
we cannot apply the coincidence simplification rule here: Loc(\for int i; a[i])
yields [·], i.e., the non-rigid array lookup function, which is also the leading
function symbol of the update {b[0] := null}. Now we apply the equivalence
simplification rule, which gives for formula ψ11:</p>
        <p>∀d : T ; c1 : T []; c2 : int; i : int;
(({b[0] := null} (true ∧ c1 ≐ a ∧ c2 ≐ i ∧ d ≐ a[i])) →</p>
        <p>∃i : int; (true ∧ c1 ≐ a ∧ c2 ≐ i ∧ d ≐ a[i]))
Under the assumption a ≠ b this can be easily proven with a first-order theorem
prover after applying the update.</p>
        <p>Note 4. The calculus with the equivalence simplification rule added is complete
for the logic containing symbols with explicit dependencies relative to the
calculus without those symbols.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Applications and Case Studies</title>
      <p>In this section we show applications of symbols with explicit dependency
information modelling specification predicates such as reachability. Using these
symbols provides advantages for interactive and automated proving. By being
able to delay expanding the definition of a non-recursive predicate, the proof
remains readable for a human reader. The automation benefits from the
availability of general simplification rules reducing the need to look into the definition
of non-rigid symbols. This is briefly illustrated in two case studies.</p>
      <sec id="sec-6-1">
        <title>Specification predicates</title>
        <p>Reachability. The treatment of linked data structures in the specification and verification
of object-oriented programs routinely requires expressing reachability between
two objects via a finite chain of fields. Fig. 1 shows a simple binary, directed
tree, where subtrees are accessible via the fields left and right.</p>
        <p>Specifying properties such as an element occurring in a given list requires
introducing and formalising the concept of reachability for these data structures. In
the following we concentrate on the definition of a predicate formalising
reachability for the directed, binary tree structure in Fig. 1: a tree with root n1
whose subtrees n2 and n3 are reachable via the fields left and right.</p>
        <p>We aim to define a predicate that takes three arguments x, y (both of type Tree) and n (of type
int) and holds if the subtree y can be reached from tree x in exactly n steps
using only the fields left and right. The recursive definition for the reachability
predicate is rather straightforward:</p>
        <p>reach(x, y, n) :⇔
 false , if n &lt; 0
 x ≐ y , if n ≐ 0
 (x.left ≠ null ∧ reach(x.left, y, n − 1)) ∨ (x.right ≠ null ∧ reach(x.right, y, n − 1)) , if n &gt; 0</p>
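        <p>The recursive definition translates directly into executable JAVA (a sketch with a hypothetical Tree class of our own; reference equality stands in for ≐):</p>

```java
// Hypothetical executable rendering of the recursive reach definition:
// reach(x, y, n) holds iff y is reachable from x in exactly n steps
// along the fields left and right.
public class ReachDemo {
    static class Tree {
        Tree left, right;
    }

    static boolean reach(Tree x, Tree y, int n) {
        if (n < 0) return false;                 // case n < 0
        if (n == 0) return x == y;               // case n = 0: x and y denote the same node
        return (x.left != null && reach(x.left, y, n - 1))
            || (x.right != null && reach(x.right, y, n - 1)); // case n > 0
    }

    public static void main(String[] args) {
        Tree n1 = new Tree(), n2 = new Tree(), n3 = new Tree();
        n1.left = n2;
        n1.right = n3;
        System.out.println(reach(n1, n3, 1)); // n3 is one right-step from n1
        System.out.println(reach(n1, n1, 1)); // exactly one step is required, so this fails
    }
}
```

        <p>The implicit dependency on the fields left and right of all nodes below x is exactly what the location descriptor below records.</p>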
        <p>A calculus rule can be easily derived from the definition. The predicate is
clearly non-rigid as it depends not only on the value of its arguments but also
on the left and right fields of the subtrees below and including x.
Therefore, we easily identify a location descriptor capturing all location dependencies,
namely \for Tree t; t.left; \for Tree t; t.right describing the set of all left,
right locations of all trees. Hence, the location-dependent predicate symbol for
capturing reachability in our notation is:</p>
        <p>reach[\for Tree t; t.left; \for Tree t; t.right](Tree, Tree, int) .
</p>
      </sec>
      <sec id="sec-6-2">
        <title>Selection Sort</title>
        <p>Verification of a sorting algorithm consists of showing that the result is indeed
sorted and that the result contains exactly the elements of the original collection,
in other words, that the result is a permutation of the input. For the verification
of a standard selection sort implementation in JAVA a specification predicate with
explicit dependencies has been used to formalise permutation of two arrays.4</p>
        <p>The permutation predicate perm[\for int[] o; int i; o[i]](int[], int[]) is defined
with the help of a recursive count of the occurrences of all elements in both
arrays. The predicate occurs once in the postcondition, stating that the resulting
array is indeed a permutation of the original, and a second time in the loop
invariant of the sorting algorithm.</p>
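<p>The counting idea behind the perm predicate can be sketched in Java as follows; the names are our own illustration, not the verified specification itself:</p>

```java
// Sketch of the occurrence-counting idea behind perm: two arrays are
// permutations of each other iff every value occurs equally often in both.
class Perm {
    // count how often value v occurs in array a
    static int count(int[] a, int v) {
        int c = 0;
        for (int x : a) if (x == v) c++;
        return c;
    }

    static boolean perm(int[] a, int[] b) {
        if (a.length != b.length) return false;
        for (int v : a)
            if (count(a, v) != count(b, v)) return false;
        return true;
    }
}
```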
        <p>
          The selection sort algorithm starts at the first array element and swaps it with
the first minimal element encountered in the remainder of the array, continuing until
the last element is reached. Therefore, it utilises two nested loops. To prove that
both loops preserve the permutation property without predicates having explicit
location dependencies requires several inductive subproofs. To substantiate this
claim we look at the following formula5 that occurs as a subgoal during the proof
and has to be proven valid:
{\for int i; aCopy[i] := a0[i] || \for int i; a0[i] := get0(a, i)} perm[. . .](a0, aCopy) →
{b10 := FALSE || b4 := TRUE || b5 := FALSE || . . . ||
\for int i; aCopy[i] := a0[i] || \for int i; a0[i] := get0(a, i) ||
a0[idx1] := get0(a0, idx2) || a0[idx2] := get0(a0, idx1)} perm[. . .](a0, aCopy)
The update of the formula in the implication’s premise states a transition to
a state where array aCopy is a shallow copy of the old content of array a0.
Simultaneously, new values are assigned to the components of array a0. The
complete formula in the premise then states that in this state aCopy and a0 are
permutations of each other.
        </p>
        <p>4 This case study has been joint work and is also reported in [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], where the focus was an improved loop invariant rule.</p>
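<p>For reference, a standard selection sort with the element swap discussed above might look like this; it is an illustrative implementation, not the verified JAVA code from the case study:</p>

```java
// Illustrative selection sort: the outer loop fixes position i, the inner
// loop finds the minimum of the remaining array; the final swap is exactly
// the transposition whose permutation-preservation the proof relies on.
class SelectionSort {
    static void sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int idx = i;
            for (int j = i + 1; j < a.length; j++)
                if (a[j] < a[idx]) idx = j;
            int tmp = a[i];    // swap a[i] and a[idx]: a transposition,
            a[i] = a[idx];     // hence the permutation property of the
            a[idx] = tmp;      // array is preserved
        }
    }
}
```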
        <p>The update of the formula in the conclusion starts with a number of updates
stemming from branch predicates irrelevant for the permutation property. The
quantified updates assign to the components of the arrays aCopy and a0 exactly the
same values as in the premise formula. But the following two
updates lead to a slightly different state, with a0[idx1] and a0[idx2] swapped.</p>
        <p>Obviously, the implication is valid, because a standard transposition lemma
can be used to show preservation of the permutation property. The technical
difficulty is that a straightforward application of this transposition lemma is
not possible. One has to prove that all updates preceding the updates of the array
locations, including the twelve updates abbreviated by “. . . ”, have no effect on the
permutation property. This is done inductively, using a strengthening of the claim
stating that for any possible value of b10, b4, b5, . . . the permutation property remains
unchanged.</p>
        <p>In contrast, the approach presented in this paper, exploiting dependency
information, allows us to prove the formula valid without induction. Application of the
Coincidence Simplification Rule (Def. 19) simplifies the state representation in
the conclusion to the point that the application of the transposition lemma is directly
possible. We were able to prove the permutation property for an executable JAVA
implementation of selection sort with only one user interaction, exploiting the
fact that swapping exactly two elements in an array preserves the permutation
property.</p>
      </sec>
      <sec id="sec-6-3">
        <title>Schorr-Waite Algorithm</title>
        <p>
          The most complex case study so far in which predicates with explicit dependencies
were used is the verification of a fully functional JAVA implementation of the
Schorr-Waite graph marking algorithm [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] for arbitrary finite graph structures.
The abstract algorithm has been verified in several case studies [
          <xref ref-type="bibr" rid="ref1 ref12 ref16 ref21 ref5 ref6 ref7">7, 6, 21, 16, 1, 5,
12</xref>
          ], mostly for the case of binary graphs. Our case study, first reported in [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], is,
to our knowledge, the first in which an executable JAVA implementation of the
general case was verified.
        </p>
        <p>5 Slightly simplified and beautified.</p>
        <p>The Schorr-Waite graph marking algorithm saves memory by avoiding encoding
the traversal path in the method call stack. Instead, a small number of
auxiliary variables and subtle pointer rotations are used to encode the backtracking
path. Besides showing that all reachable nodes are visited, the challenge is to
show that the graph structure is restored afterwards, because the traversal uses
destructive pointer manipulations.</p>
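<p>For illustration, here is a textbook sketch of the binary variant of the algorithm (the case study verifies a JAVA implementation for arbitrary finite graphs, so this is only indicative). The flag field plays the role of an auxiliary variable encoding the backtracking path, which is held in temporarily reversed left/right pointers rather than in a stack:</p>

```java
// Textbook binary Schorr-Waite marking. The backtracking path is encoded by
// pointer reversal; on retreat the original pointers are restored, so the
// graph structure is intact afterwards despite destructive manipulation.
class Node {
    Node left, right;
    boolean marked;   // set once the node is visited
    boolean flag;     // true while the right subtree is being explored

    static void mark(Node root) {
        Node p = null, t = root;  // p: reversed backtracking path, t: current
        while (!(p == null && (t == null || t.marked))) {
            if (t == null || t.marked) {
                if (p.flag) {                 // right subtree done: retreat
                    Node q = t; t = p; p = p.right; t.right = q;
                } else {                      // left done: swing to right child
                    Node q = t; t = p.right; p.right = p.left;
                    p.left = q; p.flag = true;
                }
            } else {                          // advance into left child
                Node q = p; p = t; t = t.left;
                p.left = q; p.marked = true; p.flag = false;
            }
        }
    }
}
```

<p>Note that the marked check in the advance case makes the sketch work for shared and cyclic structures, not only for trees.</p>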
        <p>Specification predicates with explicit dependencies were employed to express
reachability of two nodes in the graph structure and for a specification predicate
that characterises the backtracking path. The reachability predicate was
specified similarly to the one in Sect. 6.1. The predicate characterising the
backtracking path is specified in such a way that it evaluates to true if a given node
is an element of the currently taken path. This is clearly state-dependent, as the
backtracking path changes during execution.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Related and Future Work</title>
      <p>
        The Java Modelling Language (JML) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] supports a depends clause used to
express on which other fields a model field depends. The depends clause is then
used to extend assignable clauses appropriately in case they contain model fields.
These depends clauses have been introduced by Leino [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], the main idea being
to replace the occurrence of an abstract/model field a with a function symbol
a0(f1, . . . , fn) that explicitly enumerates the fields it depends on.
      </p>
      <p>
        Separation logic [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] requires one to specify exactly those locations of the heap
(alternatively, to separate the heap into two orthogonal heaps) that are necessary
to prove a property. The local judgements are then generalised by application
of a frame rule. This approach is in the following sense complementary to ours:
in separation logic, the shape of the heap is made explicit and dependencies are
lifted with the help of a frame axiom. We make location dependencies explicit
and can, therefore, abstract away from the concrete layout of the heap. This
seems more appropriate for target languages such as JAVA. Another advantage
of our approach is that only standard typed first-order logic is used.
      </p>
      <p>
        Dynamic frames as introduced in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] provide a uniform treatment of
modified locations and dependencies, which are separate concerns in the presented
work, allowing for more precision. Of particular interest is the preservation
operator Ξf for a dynamic frame (set of locations) f, expressing that a computation
does not depend on or modify it. This property can be translated to our framework
and logic. The other operators for dynamic frames deal with variants of modifies
clauses in the presence of object creation.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] read effects are used for treating invocations of pure methods. Our
approach is directly targeted at the level of specification predicates, without
the need to define them as pure boolean methods; it provides a compact and
precise notion of the dependencies and works without an explicit heap
representation. Another concern of [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] is the well-definedness of specifications containing
pure methods. For our approach well-definedness is desirable, but its absence does not cause
unsoundness of the verification system.
      </p>
      <p>Further related work focusses on data groups or similar concepts to support
information hiding and encapsulation. This is ongoing work in the context of
the KeY project.</p>
      <p>Future work includes exploiting further application scenarios for predicates
with explicit dependencies. Rümmer suggested using them to achieve
second-order-like specification capabilities in a first-order dynamic logic through
parameterisation with functions. This allows, for example, to use the same
specification for the sum over all elements of an array, ∑i f(a[i]), for any function f.
Further, we will investigate how to improve automation of the equivalence
update simplification rule, i.e., finding a suitable equivalent update U2.</p>
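<p>This style of parameterisation with functions can be illustrated in Java, where IntUnaryOperator stands in for the first-order function symbol f; the class and method names are illustrative:</p>

```java
import java.util.function.IntUnaryOperator;

// One implementation of "sum of f(a[i]) over all i", parameterised by the
// function f, instead of one specification per concrete function.
class Sums {
    static int sumOver(int[] a, IntUnaryOperator f) {
        int s = 0;
        for (int x : a) s += f.applyAsInt(x);  // accumulate f over the elements
        return s;
    }
}
```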
    </sec>
    <sec id="sec-8">
      <title>Conclusion</title>
      <p>
        We introduced non-rigid specification predicate (and function) symbols that
explicitly list the set of locations their value may depend on. Then we presented a
verification framework for such symbols with explicit dependencies. This
framework consists of two parts: on the one hand a uniformly generated proof
obligation that ensures correctness of a predicate with explicit dependencies with
respect to its axiomatisation; on the other hand a number of general
simplification rules that allow one to exploit explicit dependencies within the context
of a proof. Several application scenarios common in program verification as well
as two case studies support the usefulness of our approach. The
implementation and case studies were done within the KeY system [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]; however, as pointed
out in the introduction, the problem of location-dependent predicates as well as
the solution presented here can be transferred to other program logics based on
weakest-precondition reasoning.
      </p>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgments</title>
      <p>We thank Philipp Rümmer and Wolfgang Ahrendt for valuable comments.
Special thanks go also to Benjamin Weiß for valuable comments and in particular for
pointing out a problem in the original version of Def. 8. We thank the anonymous
referees for their valuable comments.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>J.</given-names>
            <surname>Abrial</surname>
          </string-name>
          .
          <article-title>Event based sequential program development: Application to constructing a pointer program</article-title>
          . In K. Araki,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gnesi</surname>
          </string-name>
          , and D. Mandrioli, editors,
          <source>Proc. Formal Methods</source>
          , volume
          <volume>2805</volume>
          <source>of LNCS</source>
          , pages
          <fpage>51</fpage>
          -
          <lpage>74</lpage>
          . Springer,
          <year>September 2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2. Á. Darvas and
          <string-name>
            <surname>K. R. M. Leino</surname>
          </string-name>
          .
          <article-title>Practical reasoning about invocations and implementations of pure methods</article-title>
          . In M. B. Dwyer and A. Lopes, editors,
          <source>Fundamental Approaches to Software Engineering (FASE)</source>
          , volume
          <volume>4422</volume>
          <source>of LNCS</source>
          , pages
          <fpage>336</fpage>
          -
          <lpage>351</lpage>
          . Springer-Verlag,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>B.</given-names>
            <surname>Beckert</surname>
          </string-name>
          , R. Hähnle, and P. Schmitt, editors.
          <source>Verification of Object-Oriented Software: The KeY Approach</source>
          , volume
          <volume>4334</volume>
          <source>of LNCS</source>
          . Springer-Verlag,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>B.</given-names>
            <surname>Beckert</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Platzer</surname>
          </string-name>
          .
          <article-title>Dynamic logic with non-rigid functions: A basis for object-oriented program verification</article-title>
          . In U. Furbach and N. Shankar, editors,
          <source>Proc. Intl. Joint Conference on Automated Reasoning</source>
          , Seattle, USA, volume
          <volume>4130</volume>
          <source>of LNCS</source>
          , pages
          <fpage>266</fpage>
          -
          <lpage>280</lpage>
          . Springer-Verlag,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>L.</given-names>
            <surname>Birkedal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Torp-Smith</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Reynolds</surname>
          </string-name>
          .
          <article-title>Local reasoning about a copying garbage collector</article-title>
          .
          <source>In Proc. 31st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL)</source>
          . ACM Press, Jan.
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>R.</given-names>
            <surname>Bornat</surname>
          </string-name>
          .
          <article-title>Proving pointer programs in Hoare logic</article-title>
          . In R. C. Backhouse and J. N. Oliveira, editors,
          <source>Mathematics of Program Construction</source>
          , volume
          <volume>1837</volume>
          <source>of LNCS</source>
          , pages
          <fpage>102</fpage>
          -
          <lpage>126</lpage>
          . Springer,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>M.</given-names>
            <surname>Broy</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Pepper</surname>
          </string-name>
          .
          <article-title>Combining algebraic and algorithmic reasoning: An approach to the Schorr-Waite algorithm</article-title>
          .
          <source>ACM Trans. Program. Lang. Syst.</source>
          ,
          <volume>4</volume>
          (
          <issue>3</issue>
          ):
          <fpage>362</fpage>
          -
          <lpage>381</lpage>
          ,
          <year>1982</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>R.</given-names>
            <surname>Bubel</surname>
          </string-name>
          .
          <article-title>Formal Verification of Recursive Predicates</article-title>
          .
          <source>PhD thesis</source>
          , Fakultät für Informatik,
          <source>Univ. Karlsruhe</source>
          ,
          <year>June 2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>E. W.</given-names>
            <surname>Dijkstra</surname>
          </string-name>
          .
          <article-title>A Discipline of Programming</article-title>
          . Prentice-Hall,
          <year>1976</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>D.</given-names>
            <surname>Harel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kozen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Tiuryn</surname>
          </string-name>
          .
          <source>Dynamic Logic</source>
          .
          <source>Foundations of Computing</source>
          . MIT Press, Oct.
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>C. A. R.</given-names>
            <surname>Hoare</surname>
          </string-name>
          .
          <article-title>An axiomatic basis for computer programming</article-title>
          .
          <source>Communications of the ACM</source>
          ,
          <volume>12</volume>
          (
          <issue>10</issue>
          ):
          <fpage>576</fpage>
          -
          <lpage>580</lpage>
          , 583, Oct.
          <year>1969</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>T.</given-names>
            <surname>Hubert</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Marché</surname>
          </string-name>
          .
          <article-title>A case study of C source code verification: the Schorr-Waite algorithm</article-title>
          . In B. K. Aichernig and B. Beckert, editors,
          <source>Proc. 3rd IEEE International Conference on Software Engineering and Formal Methods (SEFM)</source>
          , pages
          <fpage>190</fpage>
          -
          <lpage>199</lpage>
          . IEEE Press,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>I. T.</given-names>
            <surname>Kassios</surname>
          </string-name>
          .
          <article-title>Dynamic frames: Support for framing, dependencies and sharing without restrictions</article-title>
          . In J. Misra,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nipkow</surname>
          </string-name>
          , and E. Sekerinski, editors,
          <source>FM</source>
          , volume
          <volume>4085</volume>
          <source>of LNCS</source>
          , pages
          <fpage>268</fpage>
          -
          <lpage>283</lpage>
          . Springer,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. G. T. Leavens, E. Poll, C. Clifton, Y. Cheon, C. Ruby, D. Cok, P. Müller, J. Kiniry, and P. Chalin.
          <source>JML Reference Manual</source>
          , Feb.
          <year>2007</year>
          . Draft revision 1.200.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>K. R. M. Leino</surname>
          </string-name>
          .
          <article-title>Toward Reliable Modular Programs</article-title>
          .
          <source>PhD thesis</source>
          , Caltech,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>F.</given-names>
            <surname>Mehta</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Nipkow</surname>
          </string-name>
          .
          <article-title>Proving pointer programs in higher-order logic</article-title>
          . In F. Baader, editor,
          <source>Automated Deduction - CADE-19</source>
          , volume
          <volume>2741</volume>
          <source>of LNCS</source>
          , pages
          <fpage>121</fpage>
          -
          <lpage>135</lpage>
          . Springer,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Reynolds</surname>
          </string-name>
          .
          <article-title>Separation logic: A logic for shared mutable data structures</article-title>
          .
          <source>In Proc. 17th Annual IEEE Symposium on Logic in Computer Science</source>
          , pages
          <fpage>55</fpage>
          -
          <lpage>74</lpage>
          . IEEE Computer Society,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>P.</given-names>
            <surname>Rümmer</surname>
          </string-name>
          .
          <article-title>Sequential, parallel, and quantified updates of first-order structures</article-title>
          . In M. Hermann and A. Voronkov, editors,
          <source>Proc. Logic for Programming</source>
          ,
          <source>Artificial Intelligence and Reasoning</source>
          , Phnom Penh, Cambodia, volume
          <volume>4246</volume>
          <source>of LNCS</source>
          , pages
          <fpage>422</fpage>
          -
          <lpage>436</lpage>
          . Springer-Verlag,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>S.</given-names>
            <surname>Schlager</surname>
          </string-name>
          .
          <article-title>Symbolic Execution as a Framework for Deductive Verification of Object-Oriented Programs</article-title>
          .
          <source>PhD thesis</source>
          , Fakultät für Informatik,
          <source>Univ. Karlsruhe</source>
          , Feb.
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>H.</given-names>
            <surname>Schorr</surname>
          </string-name>
          and
          <string-name>
            <given-names>W. M.</given-names>
            <surname>Waite</surname>
          </string-name>
          .
          <article-title>An efficient machine-independent procedure for garbage collection in various list structures</article-title>
          .
          <source>Commun. ACM</source>
          ,
          <volume>10</volume>
          (
          <issue>8</issue>
          ):
          <fpage>501</fpage>
          -
          <lpage>506</lpage>
          ,
          <year>1967</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>H.</given-names>
            <surname>Yang</surname>
          </string-name>
          .
          <article-title>Verification of the Schorr-Waite graph marking algorithm by refinement</article-title>
          ,
          <year>2003</year>
          . Unpublished, http://ropas.kaist.ac.kr/ hyang/paper/dagstuhl-SW.pdf.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>