=Paper=
{{Paper
|id=Vol-1403/paper8
|storemode=property
|title=On Correctness of Graph Programs Relative to Recursively Nested Conditions
|pdfUrl=https://ceur-ws.org/Vol-1403/paper8.pdf
|volume=Vol-1403
|dblpUrl=https://dblp.org/rec/conf/gg/Flick15
}}
==On Correctness of Graph Programs Relative to Recursively Nested Conditions==
Nils Erik Flick⋆
Carl von Ossietzky Universität, 26111 Oldenburg, Germany,
flick@informatik.uni-oldenburg.de
Abstract. We propose a new specification language for the proof-based approach to verification of graph programs by introducing µ-conditions as an alternative to existing formalisms which can express path properties. The contributions of this paper are the lifting of constructions from nested conditions to the new, more expressive conditions, and a proof calculus for partial correctness relative to µ-conditions.
1 Introduction
Graph transformations provide a formal way to model the graph-based behaviour of a wide range of systems by way of diagrams. Such systems can be formally verified. One approach to verification proceeds via model checking of abstractions, notably Gadducci et al., Baldan et al., König et al., Rensink et al. [4, 1, 10, 18]. This can be contrasted with the proof-based approaches of Habel, Pennemann and Rensink [7, 6] and Poskitt and Plump [15]. Here, state properties are expressed by nested graph conditions, and a program can be proved correct with respect to a precondition c and a postcondition d. The following figure presents a schematic overview of the approach, which is also our starting point:
[Schematic overview: from the graph program and the postcondition d, a weakest-precondition calculus computes a precondition; a prover compares it against the given precondition c and answers "yes, correct", "no", or "unknown".]
The correctness proof is done in the style of Dijkstra's [2] predicate transformer approach in Pennemann's thesis [12], while Poskitt's thesis [14] features a Hoare [8] logic for partial and total correctness. Both works are based on nested conditions, which cannot express non-local properties of graphs, such as connectivity. In this paper, we consider non-local properties, and we present an extension to the proof calculus from [12].
⋆ This work is supported by the German Research Foundation (DFG), grant GRK 1765 (Research Training Group – System Correctness under Adverse Conditions).
Our formalism is an extension of nested conditions by recursive definitions. While
several extensions of nested graph conditions to non-local conditions already
exist (Radke [17], Poskitt and Plump [16]), we argue that as opposed to the
former, ours already offers a weakest precondition calculus that can handle any
condition expressible in it; as compared to the latter, which relies more heavily
on expressing properties directly in (monadic second-order) logic, ours is more
closely related to nested conditions and shares the same basic methodology.
Therefore µ-conditions, albeit still work in progress, offer a new viewpoint that
may be sufficiently different from existing ones to be worth investigating.
The outline of the paper is as follows: Section 2 recalls graph programs and conditions. Sections 3 and 4 introduce µ-conditions and correctness under µ-conditions, respectively, together with an exemplary application of the method; Section 5 provides context by listing related work, and Section 6 concludes with an outlook. After the main text, there is an appendix with the proofs.
2 Graph Conditions and Programs
In this section, we introduce graph conditions and graph programs. We assume familiarity with graph transformation systems in the sense of Ehrig et al. [3] and the basic notions of category theory. For standard definitions and more details, we refer the reader to Ehrig et al. [3]. For an in-depth introduction to nested conditions and graph programs, and practical approaches to semi-automatic theorem proving in this context, we refer the reader to Pennemann [12].
Notation. The domain and codomain of a morphism f : G → H are denoted by dom(f) = G and cod(f) = H. Injective morphisms (monomorphisms) are distinguished typographically by a curly arrow f : G ↪ H, while double-tailed arrows f : G ↠ H denote surjective ones (epimorphisms). We use the symbol M to denote the class of all graph monomorphisms. A partial morphism is a pair of monomorphisms with the same domain. The empty graph is denoted by ∅. All graphs in this paper are assumed to be finite.
A brief review of nested conditions follows. Nested graph conditions were proposed by Habel and Pennemann. Finite nested conditions were later shown to be as expressive as graph-interpreted first-order predicate logic. Graph conditions can be used as constraints to specify state properties, or as application conditions to restrict the applicability of a rule.
Definition 1 (Nested Graph Conditions). Let Cond be the class of nested conditions, defined inductively as follows (where P, C′, C are graphs):
– If J is a countable set and for all j ∈ J, cj is a condition (over P), then ⋁j∈J cj is a condition (over P). This includes the case J = ∅ (for any P).
– If c is a condition (over P), then ¬c is also a condition (over P).
– If a : P ↪ C′ is a monomorphism, ι : C ↪ C′ is a monomorphism and c′ is a condition (over C), then ∃(a, ι, c′) is a condition (over P).
We call c′ a direct subcondition of ∃(a, ι, c′), ¬c′ and c′ ∨ c′′, and use subcondition for the reflexive and transitive closure of this syntactically defined relation.
Notation. If c is a condition over P, then P is its type¹ and we write c : P, and CondP is the class of all conditions over P. We may write ∃(a, c) instead of ∃(a, idcod(a), c). The usual abbreviations define the other standard operators: ⋀ is ¬⋁¬, ∀ is ¬∃¬. No morphism satisfies the disjunction over the empty index set. We introduce false as a notation for it, and true for ¬false. We may omit the subcondition true (together with ι), writing ∃(a) for ∃(a, ι, true).
When all the index sets are finite, one obtains the finite nested conditions. The morphism ι serves to unselect² a part of C′, which will become necessary later.
Definition 2 (Satisfaction). A monomorphism f : P ↪ G satisfies a condition c : P, denoted f |= c, iff c = true; or c = ¬c′ and f ⊭ c′; or c = ⋁j∈J cj and there is a j ∈ J such that f |= cj; or c = ∃(a, ι, c′) (a : P ↪ C′, ι : C ↪ C′, c′ : C) and there exists a monomorphism q : C′ ↪ G such that f = q ◦ a and q ◦ ι |= c′.
[Diagram: ∃(P −a→ C′ ←ι− C, c′); f : P ↪ G factors as f = q ◦ a through some q : C′ ↪ G with q ◦ ι |= c′.]
A graph G satisfies a condition c : ∅ iff the unique morphism ∅ ↪ G satisfies c.
In the diagram of Def. 2, the triangle indicates that C is the type of the subcondition c′ which appears nested inside ∃(a, ι, c′).
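To make Def. 2 concrete, here is a minimal Python sketch using our own tuple encoding of conditions over simple directed graphs (not the paper's categorical formulation; the unselection morphism ι is omitted). A condition is `('true',)`, `('not', c)`, `('or', c1, ..., cn)`, or `('exists', new_nodes, pattern_edges, c)`, where `exists` extends the current injective match by images for the listed fresh pattern nodes such that all listed pattern edges are present:

```python
from itertools import permutations

def sat(graph, match, cond):
    """Check whether the partial injective match satisfies cond in graph."""
    nodes, edges = graph
    tag = cond[0]
    if tag == 'true':
        return True
    if tag == 'not':
        return not sat(graph, match, cond[1])
    if tag == 'or':
        return any(sat(graph, match, c) for c in cond[1:])
    _, new, pat_edges, sub = cond                   # tag == 'exists'
    free = sorted(nodes - set(match.values()))
    for images in permutations(free, len(new)):     # injective extensions q
        q = dict(match, **dict(zip(new, images)))
        if all((q[u], q[v]) in edges for u, v in pat_edges):
            if sat(graph, q, sub):
                return True
    return False

# The condition of Fig. 1: two nodes joined by an edge such that the
# second node has no self-loop.
c_fig1 = ('exists', ['a', 'b'], [('a', 'b')],
          ('not', ('exists', [], [('b', 'b')], ('true',))))
G1 = ({1, 2}, {(1, 2)})            # satisfies the condition
G2 = ({1, 2}, {(1, 2), (2, 2)})    # every edge's target has a self-loop
print(sat(G1, {}, c_fig1), sat(G2, {}, c_fig1))   # True False
```

The recursive structure of `sat` mirrors the four cases of Def. 2 directly; brute-force enumeration of extensions is feasible because all graphs here are finite.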
Remark 1 (No Added Expressivity). Our conditions with ι are as expressive as the nested conditions defined in [12]. The proof, which we omit here, relies on the transformation A from [12].
Definition 3. ≡ denotes logical equivalence, i.e. for conditions c, c′ : P, c ≡ c′ iff for all monomorphisms m with domain P, m |= c ⇔ m |= c′.
Notation. As one can see in Fig. 1, the notation for graph conditions customarily only depicts source or target graphs of morphisms. The tiny blue numbers show the morphisms' node mappings. We also adopt the convention of not explicitly representing the morphism ι in a situation ∃(a, ι, xi); we prefer to annotate the variable's type graph with the images of items under ι in parentheses.
¹ When we mention "type graphs" in the text, we just mean graphs used as types.
² We will use the term "unselection" any time a morphism is used in the inverse direction: in Def. 1, the morphism ι is used to base subconditions on a smaller subgraph, in effect reducing the selected subgraph; it will also appear in our definition of graph programs as the name of an operation that reduces the current selection, i.e. the subgraph the program is currently working on – similarly for "selection".
[Figure: the condition ∃(nodes 1, 2 with an edge 1 → 2, ¬∃(self-loop on node 2)), drawn with node mappings, and a satisfying graph.]
Fig. 1. A nested graph condition (stating the existence of two nodes linked by an edge, where the second node does not have a self-loop) and a graph satisfying it.
Next, we introduce graph transformations. We follow the double pushout approach with injective rules and injective matches. For technical reasons, we define graph transformations in terms of four elementary steps, namely selection, deletion, addition and unselection. Deletion and addition always apply to a selected subgraph, and selection and unselection allow the selection to be changed. skip is a no-op used in the definition of sequential composition. The definition below allows for somewhat more general combinations of the basic steps, which cannot be expressed by sets of graph transformation rules.
The semantics of a graph program is a triple of two monomorphisms and one partial morphism. The two monomorphisms represent the selected subgraphs before and after the execution of the program respectively, and the partial morphism records the changes effected by the program. Our programs are a proper subset of those in Pennemann [12], and use the same semantics.
Definition 4 (Graph Programs).
In the following table, x, l, r, y, min and mout are monomorphisms, with x, l, r
and y arbitrarily chosen to define a program step, while min and mout are called
interfaces and universally quantified in the set comprehensions that appear in
the definitions below.
Name         Program P   Semantics ⟦P⟧
selection    Sel(x)      {(min, mout, x) | mout ◦ x = min}
deletion     Del(l)      {(min, mout, l⁻¹) | ∃l′, (mout, l, min, l′) pushout}
addition     Add(r)      {(min, mout, r) | ∃r′, (min, r, mout, r′) pushout}
unselection  Uns(y)      {(min, mout, y⁻¹) | mout = min ◦ y}
skip         skip        {(m, m, iddom(m)) | m ∈ M}
If P and Q are graph programs, then so are their disjunctive {P, Q} and sequential (P; Q) composition. The semantics of disjunction is the set union ⟦P⟧ ∪ ⟦Q⟧, and the semantics of sequence is ⟦P; Q⟧ = {(m, m′, p) | ∃(m, m′′, p′) ∈ ⟦P⟧, (m′′, m′, p′′) ∈ ⟦Q⟧, p = p′; p′′}, where the composition p′; p′′ of partial morphisms p′ = (l1 : D1 ↪ G1, r1 : D1 ↪ H1) and p′′ = (l2 : D2 ↪ H1, r2 : D2 ↪ H2) is the partial morphism (l1 ◦ l2′ : D′′ ↪ G1, r2 ◦ r1′ : D′′ ↪ H2) obtained using the pullback (r1′, l2′) of (r1, l2). If P is a graph program, then so is its iteration P*; ⟦P*⟧ = ⋃j∈ℕ ⟦P^j⟧ where P^j = P; P^(j−1) for j ≥ 1 and P^0 = skip.
Remark 2. The definitions generalise the state transitions in plain graph transformation, a rule ̺ = (L ←↩ K ↪ R) with monomorphisms l : K ↪ L and r : K ↪ R being precisely simulated by the program Sel(∅ ↪ L); Del(l); Add(r); Uns(∅ ↪ R).
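The simulation in Remark 2 can be illustrated on a concrete toy representation (our own: graphs as pairs of node and edge sets, not the paper's categorical semantics). The sketch applies a hypothetical rule that reverses one edge, where L is a single edge a → b, K keeps both nodes, and R is the reversed edge b → a:

```python
def apply_reverse_edge(graph, a, b):
    """Apply the edge-reversing rule at the match (a, b) via the four steps."""
    nodes, edges = graph
    assert (a, b) in edges and a != b   # Sel: an injective match of L
    edges = edges - {(a, b)}            # Del: remove the L-minus-K edge
    edges = edges | {(b, a)}            # Add: adjoin the R-minus-K edge
    return (nodes, edges)               # Uns: forget the selection

G = ({1, 2, 3}, {(1, 2), (2, 3)})
H = apply_reverse_edge(G, 1, 2)
print(sorted(H[1]))   # [(2, 1), (2, 3)]
```

Since this rule deletes no nodes, the pushout complement for the deletion step always exists; rules that delete nodes additionally require the dangling condition discussed later.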
3 µ-Conditions
In this section, we define µ-conditions on the basis of nested graph conditions. These are capable of expressing path and connectivity properties, which frequently arise in the study of the correctness of programs with recursive data structures, or in the modelling of networks. We then define and prove the correctness of some basic constructions. An example is provided at the end of this section to illustrate the constructions step by step.
3.1 Defining µ-Conditions
Nested conditions are a very successful approach to the specification of graph properties for verification. However, they are unable to express non-local properties such as connectedness. Our idea is to generalise nested conditions to capture certain non-local properties by adding recursion. The resulting formalism will be similar to first order fixed point logics, see e.g. Kreutzer [9]. The reader might want to compare our µ-conditions to a distinct formalism towards expressing non-local properties, the very powerful grammar-based HR∗ conditions of Radke [17]. We argue that µ-conditions are worth looking into despite the availability of strong contenders for the extension of nested conditions to non-local properties, such as MSO-conditions [16]: µ-conditions provide a new and different generalisation of nested conditions, and it is not immediately clear how the respective expressivities compare. The related work section, Sec. 5, contains a summary of different non-local graph condition formalisms.
Specifically, we will show in this section that the weakest liberal precondition transformation, the core of the Dijkstra-style approach, can be adapted.
Notation. Sequences (of graphs, placeholders, morphisms) are written as bold letters P, x, f, and their components are numbered starting from 1. The length of a sequence P is denoted by ‖P‖. Indexed typewriter letters x1 stand for placeholders, i.e. variables. The notation c : P indicating that c has type P is also extended to sequences: c : P (provided ‖c‖ = ‖P‖).
To define fixed point conditions, we need something to take fixed points of, and to ascertain that the fixed point exists. Choosing a partial order on CondP, one can define monotonic operators on CondP. The semantics of satisfaction already defines a pre-order: c ≤ c′ iff every morphism that satisfies c also satisfies c′, which is obviously transitive and reflexive. As in every pre-order, ≤ ∩ ≤⁻¹ is an equivalence relation compatible with ≤, and comparing representatives via ≤ partially orders its equivalence classes. We introduce variables as placeholders where further conditions can be substituted³.
³ Note that in our approach variables stand only for subconditions, not for attributes or parts of graphs. Wherever confusion with similarly named concepts from the literature could arise, we will use the word "placeholder" meaning "variable".
To represent systems of simultaneous equations, we work on tuples of conditions. If P = P1, ..., P‖P‖ is a sequence of graphs, then CondP is the set of all ‖P‖-tuples c of conditions whose i-th element is a condition over the i-th graph of P. Satisfaction is defined component-wise: f |= c iff ∀k ∈ {1, ..., ‖P‖}, fk |= ck.
By definition, ⋁ and ⋀ of countable sets of CondP conditions exist for any P, and they are easily seen to be least upper and greatest lower bounds of the sets. This makes Cond≡P a complete lattice. Let CondP (tuples) be ordered with the product order induced by ≤ on the components. This again induces a partial order on the set of equivalence classes, Cond≡P. Thus, Cond≡P is also a complete lattice, and monotonic operators have least fixed points by the Knaster-Tarski theorem [20], given by the limit of Fⁿ(false) for n ∈ ℕ. This ensures that systems of equations as defined below yield least fixed point solutions, which is crucial in the definition of a µ-condition. We extend the inductive definition from Def. 1 as follows:
Definition 5 (Graph Conditions with Placeholders). Given a graph P and a finite sequence P of graphs or morphisms, a (graph) condition with placeholders from P over P is either ∃(a, ι, c), or ¬c, or ⋁j∈J cj (where c and the cj are conditions with placeholders), or xi, 1 ≤ i ≤ ‖P‖, where xi is a variable of type Pi.
A condition can be substituted for a variable of same type:
Definition 6 (Substitution). If P is a list of graphs and F is a condition
with placeholders x over P , then if c ∈ CondP , then F[x/c] is obtained by
substituting each occurrence of xi by ci for all i ∈ {1, ..., kPk}.
Satisfaction of conditions with placeholders by a morphism f is defined in the
obvious way relative to a valuation, which is an assignment of true or false to
each monomorphism of the type graph of the variable into cod(f ).
As discussed above, a least fixed point will be sought only up to logical equivalence. To guarantee existence of the least fixed point, the operator must be monotonic (c ≤ d ⇒ F(c) ≤ F(d) for any c, d ∈ CondP). Monotonicity can be enforced syntactically for substitutions by never placing a variable under an odd number of negations, which is proved by structural induction as in fixed point logics or the modal µ-calculus.
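This syntactic criterion is easy to check mechanically. A minimal sketch, using our own tuple encoding of conditions (`('var', i)`, `('not', c)`, `('or', c1, ..., cn)`, `('exists', c)`; other leaves are constants), traverses the condition tracking negation parity:

```python
def monotone(cond, negations=0):
    """True iff no variable occurs under an odd number of negations."""
    tag = cond[0]
    if tag == 'var':
        return negations % 2 == 0          # even parity only
    if tag == 'not':
        return monotone(cond[1], negations + 1)
    if tag == 'or':
        return all(monotone(c, negations) for c in cond[1:])
    if tag == 'exists':
        return monotone(cond[1], negations)
    return True                             # constants are always fine

good = ('or', ('exists', ('var', 1)), ('not', ('not', ('var', 1))))
bad = ('exists', ('not', ('var', 1)))
print(monotone(good), monotone(bad))   # True False
```

A doubly negated variable passes, since two negations cancel; a single negation over a variable, as in `bad`, is rejected.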
Definition 7 (µ-Condition). Given a finite list P, if {Fi}i∈{1,...,‖P‖} are conditions with placeholders from P, over the graphs of P respectively, then µ[P]F denotes the least fixed point of the operator c ↦ F[x/c].
A µ-condition is a pair (b, l) consisting of a condition with placeholders b and a finite list of pairs l = (xi, Fi(x)) of a variable xi : Pi and a condition Fi(x) : Pi with placeholders from x, for some graph Pi, such that F is monotonic.
Notation. We write the list of pairs l = (xi, Fi(x)) as a system of equations x = F(x). We call b the main body and l the recursive specification of (b, l), and F(c) is understood as substitution of conditions c for the variables x.
Thus each condition with placeholders typed over P defines a unary operator on CondP.
Remark 3 (First Example: µ-Conditions are More General than Nested).
1. µ-conditions generalise nested conditions; consequently all examples for nested conditions are examples for µ-conditions (with no variables or equations).
2. µ-conditions are strictly more general than nested conditions: the following expresses the existence of a path of unknown length between two given nodes.
[Displayed condition: x1[1 2] where x1[1 2] = ∃(edge from 1 to 2) ∨ ∃(fresh node 3 and edge from 1 to 3, x1[1(3) 2(2)]).]
It is read as follows: the word "where" separates the main body from the equations. Here, x1 is the only variable, and its type graph is indicated in square brackets. The second existential quantifier uses a morphism to unselect node 1 and the sole edge: its source is the type graph of x1, which is indeed syntactically required for using the variable in that place. The unselection morphism ι is implicit in the notation, and is only expressed by adding small blue numbers in parentheses to the node numbers in its source graph to specify the mapping. This compact notation for ι is why the second existential quantifier in the example has only two fields. To ease reading and writing, we adopt the convention to always use precisely the same layout for the type graph of a given variable.
The following statement is not needed in the proofs that will follow, but it helps
motivate the use of the “unselection” morphisms. We therefore view it as justified
to leave the proof as an exercise:
Remark 4 (Why ι). A µ-condition where ι is the identity in all subconditions of
the main body and of the components Fi (x) is equivalent to a nested condition.
The following fact is well-known:
Remark 5. The least fixed point of F is equivalent to ⋁n∈ℕ Fⁿ(false).
Definition 8 (Satisfaction). The µ-condition (b | x = F(x)) with x : P is satisfied by a morphism f iff f |= b[x/µ[P]F].
Remark 6 (No Infinite Nesting). By the characterisation of the least fixed point
as an infinite disjunction, every µ-condition is equivalent to an infinite nested
condition. Infinitely deep nesting does not arise, because the characterisation in
Remark 5 yields a countable disjunction of finitely deeply nested conditions.
A morphism satisfies a given µ-condition iff it satisfies the finite nested condition
obtained by unrolling the recursive specification up to some finite depth and
substituting the resulting nested conditions into the main body:
Proposition 1 (Satisfaction at Finite Recursion Depth). f |= (b | x = F(x)) iff ∃n ∈ ℕ, f |= b[x/Fⁿ(false)].
Theorem 1 (Deciding Satisfaction of µ-conditions). Given a morphism f : P ↪ G and a µ-condition c, it is decidable whether f |= c.
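On a finite graph, Remark 5 and Theorem 1 can be illustrated on the running path example: the least fixed point of x1 is reached after finitely many Kleene iterations Fⁿ(false). The sketch below is our own semantic rendering of that condition, not the paper's general decision procedure; for every pair of distinct nodes it computes whether x1 holds, starting from false everywhere and iterating until nothing changes. The fresh intermediate node must differ from the two currently selected nodes, mirroring the injective match of node 3:

```python
def path_fixpoint(graph):
    """Kleene-iterate the path equation x1 to its least fixed point."""
    nodes, edges = graph
    sat = {(u, v): False for u in nodes for v in nodes if u != v}
    changed = True
    while changed:                        # each round computes F(sat)
        changed = False
        for (u, v) in sat:
            new = (u, v) in edges or any(
                (u, w) in edges and sat[(w, v)]
                for w in nodes if w != u and w != v)
            if new and not sat[(u, v)]:
                sat[(u, v)] = True
                changed = True
    return sat

G = ({1, 2, 3, 4}, {(1, 3), (3, 2)})
fix = path_fixpoint(G)
print(fix[(1, 2)], fix[(4, 2)])   # True False
```

Monotonicity guarantees the iteration only ever flips entries from false to true, so it stabilises after at most as many rounds as there are node pairs.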
3.2 Weakest Liberal Preconditions of µ-conditions
In this subsection, we present a construction to compute the weakest liberal precondition of any given µ-condition with respect to any graph program P that does not use iteration ("liberal" means that termination of P is not implied; the qualification is redundant in the absence of iteration, as only iteration causes non-termination).
Definition 9 (Weakest Liberal Precondition). The weakest liberal precondition (wlp) of c with respect to the program P, wlp(P, c), is the least condition with respect to implication such that f′ |= c ⇒ f |= wlp(P, c) if (f, f′, p) ∈ ⟦P⟧ for some partial morphism p.
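Read literally, Def. 9 characterises wlp(P, c) semantically: the least predicate that must hold at f whenever some run of P from f yields an f′ satisfying c. On small finite graphs this can be evaluated by brute-force enumeration of outcomes. The toy sketch below is entirely our own illustration (a hypothetical program P that nondeterministically deletes one edge, and a postcondition c demanding that an edge remains), not the paper's syntactic construction:

```python
def results(graph):
    """All outcomes of the toy program P: delete exactly one edge."""
    nodes, edges = graph
    return [(nodes, edges - {e}) for e in edges]

def post(graph):
    """Toy postcondition c: at least one edge remains."""
    return len(graph[1]) > 0

def wlp_holds(graph):
    # Least predicate containing every graph with a c-satisfying outcome.
    return any(post(h) for h in results(graph))

G1 = ({1, 2, 3}, {(1, 2), (2, 3)})   # deleting either edge leaves one
G2 = ({1, 2}, {(1, 2)})              # deleting the only edge leaves none
print(wlp_holds(G1), wlp_holds(G2))  # True False
```

The point of the paper's Wlp transformation is to obtain such a precondition syntactically as a µ-condition, rather than by enumerating program runs as done here.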
We will show that under this assumption there is a µ-condition that expresses
precisely the weakest liberal precondition of a given µ-condition with respect to
a rule, and it can be computed. The result is similar to the situation for nested
conditions. To derive it, we use the shift transformation Am (c) from [12] whose
fundamental property is to transform any nested condition c into another nested
condition such that m′′ |= Am (c) iff m′′ ◦ m |= c for all composable pairs m′′ , m
of monomorphisms (Lemma 5.4 from [12]). Since this and similar constructions
play an important role in this section, we recall here the case c = ∃(a, c′): if (m′, a′) is the pushout of (m, a), let Epi be the set of all epimorphisms e with domain cod(m′) that compose to monomorphisms b := e ◦ a′ and r := e ◦ m′. Then Am(∃(a, c′)) = ⋁e∈Epi ∃(b, Ar(c′)).
With help of the unselection ι in ∃(a, ι, c), it is at first glance trivial to exhibit
a weakest liberal precondition with respect to U ns(y). However, to handle the
addition and deletion steps, a construction becomes necessary that makes the
affected subgraph explicit again. This information is crucial to obtain the weakest
liberal precondition with respect to Add(r) and Del(l) and must not be forgotten
at any nesting level in order to obtain the correct result. To that aim, we define
a partial shift construction which makes sure that the type graph of the main
body is never unselected in the µ-condition but is instead mapped in a consistent
way as a subgraph of the type graph of each variable. The following serves to
obtain the new type graphs containing the type of the main body:
Construction 1 (New type graphs for partial shift).
We assume that an arbitrary total order on all graph morphisms is fixed. If c = b | µ[K]F is a µ-condition, then for a variable xi of K, XR,c(xi) is defined as the sequence of morphisms f obtained as below, in ascending order.
The morphisms f are obtained from the pushout X of ∅ ↪ R and ∅ ↪ Pi by collecting all epimorphisms X ↠ Pj′ that compose with the pushout morphisms to monomorphisms, in particular f : R ↪ Pj′.
Construction 2 (Partial shift).
Given monomorphisms x : G ↪ H and y : R ↪ H, Px,y(b | µ[K]F) := Px,y(b) | µ[K′]F′, where the new list of variables K′ and their respective types P′ are obtained by concatenating all XR,c(xi) of the variables of K in order. The new variables and equations are obtained by applying Pf,y to the variables of the left hand sides with all possible morphisms f from R, as below, and accordingly to the right hand sides.
Px,y(xi) = xiʸ if xi : G, where xiʸ : H is a new variable, H = cod(y).
Px,y(∃(a : P ↪ C′, ι : C ↪ C′, c′)) is constructed as follows: let (h : H ↪ H′, x′ : C′ ↪ H′) be the pushout of (x, a), and let Epi be the set of all epimorphisms e with domain H′ that compose to monomorphisms r = e ◦ x′ and b = e ◦ h with the pushout morphisms. Px,y(∃(a, ι, c′)) = ⋁e∈Epi ∃(b : H ↪ E, ι′ : J ↪ E, Pi,y′(c′)): for each member of the disjunction, form the pullback of r ◦ ι and b ◦ y, then pushout the obtained morphisms to (y′, i) as in the diagram:
[Diagram: pushout (h, x′) of (x, a); epimorphism e : H′ ↠ E with b = e ◦ h and r = e ◦ x′; pullback B of r ◦ ι and b ◦ y; pushout (y′, i) of B's projections, giving J with the induced ι′ : J ↪ E.]
Boolean combinations of conditions are transformed to the corresponding combinations of the transformed members.
Remark 7 (Ambiguous Variable Contexts). Note that in a µ-condition it is
not necessarily true that in all contexts where xi is used, it appears with the
same morphism R ֒→ Pi (where R is the type of b). It is however possible to
equivalently transform every µ-condition into a "normal form" that has that property. Applying PidR,idR will by construction result in a µ-condition with unambiguous inclusions R ↪ Pi for all variables (namely the morphisms from the sequences XR,c), and this property is also preserved by the constructions introduced later in this section. Unreachable variables created by X and P can be pruned to obtain an equivalent, but sometimes smaller, µ-condition.
Equivalence of conditions with placeholders (unlike µ-conditions) is only defined for conditions using the same sets of variables, as equivalence in the sense of nested conditions for each valuation. We extend A to conditions with placeholders by defining Am(x) to be ∃(idcod(m), m, x) if x : P.
One can show that Px,y is equivalent to Ax . The reason for introducing Px,y is
that it allows precise control over the types of the variables in the transformed
condition, which should include the type of the main body. Intuitively, as this
corresponds to the currently selected subgraph of a graph program, additions
and deletions are applied to that subgraph and one must make sure that the
changes apply to the whole µ-condition to obtain the correct result.
Lemma 1. The conditions Px,y (c) and Ax (c) are equivalent.
We introduce the transformations δ′m(c), α′m(c) (based on auxiliary transformations δm,y(c) and αm,y(c), respectively), which are used in the computation of the weakest liberal precondition (with respect to addition and deletion, respectively⁴) of a µ-condition that has already undergone partial shift:
Definition 10 (Transformations δ′ and α′).
Let c : G be a condition with placeholders. If r : K ↪ R and y : R ↪ G (resp. l : K ↪ L and y : K ↪ G) are monomorphisms, then δr,y(c) (resp. αl,y(c)) is defined as follows: δr,y(¬c) = ¬δr,y(c) and δr,y(⋁j∈J cj) = ⋁j∈J δr,y(cj) (respectively: αl,y(¬c) = ¬αl,y(c) and αl,y(⋁j∈J cj) = ⋁j∈J αl,y(cj)).
For c = ∃(a, ι, c′), the following constructions are used:
[Two diagrams, one for the δ-case and one for the α-case: in each, the square built from a : P ↪ C′, ι : C ↪ C′ and r : K ↪ R, y (resp. l : K ↪ L, y) is completed by pushouts and pullbacks to new morphisms a′, ι′, and the subcondition is transformed recursively as δr,y′′(c′) (resp. αl,y′′(c′)).]
Case of δm,y (c′ ): If the pushout complement of r and a ◦ y does not exist, then
δm,y (c) = f alse. Otherwise, obtain it as x′ and r′ and pullback (a, r′ ) to (a′ , r′′ )
with source W ; this yields a morphism h from K to W to make the diagram
commute and the special PO-PB lemma [3] applicable. Pullback (ι, r′ ) to (ι′ , r′′′ ),
(x′ = a′ ◦ h) and let δm,y (c) = ∃(a′ , ι′ , δm,y′′ (c′ )) (the pullback property yields
existence and uniqueness of y ′′ between K and V to make it commute).
Case of αl,y (c): pushout (y, l) to (l′ , h); pushout (l′ , a) to (l′′ , a′ ); pullback (a′ ◦
h, l′′ ◦ι) and pushout to (y ′′ , l′′′ ) over the pullback (not drawn in the figure except
for the morphism l′′′′ ). The commuting morphism from the pushout object V to
X fills in to yield ∃(a′ , ι′ , αl,y (c′ )). The commuting morphism from L to V is y ′′ .
For variables, δm,y (xi ) = x′i is a new variable of type K, likewise αm (xi ) has type
L (see Rem. 7). Finally, δ ′ m (c) = δm,id (Pid,id (c)) and α′ m (c) = αm,id (Pid,id (c)).
In contrast to P, the transformations α′ and δ ′ leave the number of variables
unchanged. Only the types of the variables are modified. We recall that for any l : K ↪ L, there is a condition ∆(l) that expresses the possibility of effecting Del(l), i.e. ∆(l) is satisfied exactly by the first components of tuples in ⟦Del(l)⟧. We describe ∆(l) only informally: f |= ∆(l) states the non-existence of edges that are not in im(f) but are incident to a node in im(f) − im(f ◦ l).
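The condition ∆(l) amounts to the familiar DPO dangling check, which is easy to test directly on a simple-graph representation (a sketch; the representation and all names are ours): a deletion is blocked exactly when some edge outside the matched part touches a node that the deletion would remove.

```python
def delta_holds(graph, match_nodes, match_edges, kept_nodes):
    """Dangling check: no unmatched edge may touch a deleted node."""
    nodes, edges = graph
    deleted = match_nodes - kept_nodes       # im(f) minus im(f o l)
    return all(u not in deleted and v not in deleted
               for (u, v) in edges - match_edges)

# Deleting node 2 together with its two matched edges is allowed in G1
# but blocked in G2, where the unmatched edge (2, 4) would dangle.
G1 = ({1, 2, 3}, {(1, 2), (2, 3)})
G2 = ({1, 2, 3, 4}, {(1, 2), (2, 3), (2, 4)})
m_nodes, m_edges, kept = {1, 2, 3}, {(1, 2), (2, 3)}, {1, 3}
print(delta_holds(G1, m_nodes, m_edges, kept),
      delta_holds(G2, m_nodes, m_edges, kept))   # True False
```

In categorical terms, the check succeeds exactly when the pushout complement needed by ⟦Del(l)⟧ exists at the given match.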
Theorem 2 (Weakest Liberal Precondition for µ-conditions). For each rule ̺, there is a transformation Wlp̺ that transforms µ-conditions to µ-conditions and assigns to each condition c another condition Wlp̺(c) with the property that m |= Wlp̺(c) whenever (m′, m, p) ∈ ⟦̺⟧ and m′ |= c, and Wlp̺(c) is the least condition with respect to implication having this property.
⁴ The letters were chosen so as to indicate the effect of the transformation: to compute the weakest precondition with respect to addition, δ′ needs to delete portions of the morphisms in the condition, and vice versa.
3.3 A Weakest Liberal Precondition Example
In this subsection, we construct a weakest liberal precondition of a µ-condition
step by step. Figure 2 shows a single-rule graph program which matches a node
with exactly one incoming and one outgoing edge and replaces this by a single
edge. The effect of the rule is to contract paths, and it can be applied as long as
no other edges are attached to the middle node.
[Displayed rule: Sel(∅ ↪ L); Del; Add; Uns(R ←↩ ∅), where L is a path 1 → 3 → 2, the deletion removes the middle node 3 together with its two edges, and the addition inserts an edge from 1 to 2.]
Fig. 2. A path-contracting rule ̺contract = Selc; Delc; Addc; Unsc.
Figure 3 shows a µ-condition whose weakest liberal precondition we wish to
compute. It is a typical example of a µ-condition, which evaluates to true on
those graphs that contain some node which has a path to every other node.
[Displayed condition: ∃(node 1, ∀(node 2, x1[1 2])) where x1[1 2] = ∃(edge from 1 to 2) ∨ ∃(fresh node 3 and edge from 1 to 3, x1[1(3) 2(2)]).]
Fig. 3. A µ-condition c3 = (b, l) expressing the existence of a node from which there exists a path to every other node.
[Displayed condition: ∃(two nodes 1, 2 ←↩ ∅, ∀(nodes 1, 2, x1[1 2])), with x1 as in Fig. 3.]
Fig. 4. Wlp(Unsc, c3). Note that the nodes under the universal quantifier are not the same as those of the outer existential quantifier, as these have been unselected: the type of the subcondition ∀(...) is ∅.
In Figures 5 and 6, a partial shift has been applied to the condition of Figure 4 (Wlp(Unsc, c3)), and the modifications the condition undergoes in the computation of the weakest precondition with respect to Addc and Delc are highlighted in various colours (see Figure 7 for a legend). Construction 1 has yielded a new list of variables⁵, x1, ..., x7; the corresponding equations are shown in Figure 6. Note that the representation is somewhat further abbreviated: type graphs of variables are suppressed in the notation in subconditions ∃(a, ι, xi) when the mapping ι from the type graph to the target of a is the identity. No other simplifications were applied. We have highlighted in yellow and red the type of the main body of Wlp(Unsc, c3) throughout Figure 5; edges that are highlighted in red are deleted to compute Wlp(Addc, Wlp(Unsc, c3)) as per Def. 10; the edges and nodes highlighted in red are not present initially, but added to compute Wlp(Delc, Wlp(Addc, Wlp(Unsc, c3))) as per Def. 10, which also conjoins ∆(l) to the main body (not represented, as it is easy to compute and would only encumber the illustration). In the end, a universal quantification with morphism ∅ ↪ L completes the weakest precondition with respect to the rule, as in the construction for nested conditions [12].
⁵ Although the original µ-condition needed only one variable, the partial shift yields a µ-condition with multiple variables in general.
[Figure content not reproducible here: the main body is an outer existential quantifier over a decorated pattern, followed by a conjunction of universal quantifiers binding the variables x1, ..., x7.]
Fig. 5. Construction of Wlp(Delc; Addc; Unsc, c3): main body (variables: see below).
[Figure content not reproducible here: one recursive equation per variable x1, ..., x7, each a disjunction of existential quantifiers over decorated type graphs.]
Fig. 6. Construction of Wlp(Delc; Addc; Unsc, c3): equations for the variables.
node/edge decoration – meaning:
– items (nodes and edges) selected for Wlp(Uns(y), c)
– items to be deleted to obtain Wlp(Add(r), c)
– items to be added to obtain Wlp(Del(l), c)
Fig. 7. Legend for Figure 5.
The outer existential quantifier in Fig. 5, where the unselection morphism is not
shown in the abbreviated representation, is really as in Fig. 8:
[Displayed condition: ∃(pattern on nodes 1, 2, 5 with an explicit unselection morphism ←↩, ...).]
Fig. 8. Outer nesting level of the conditions in Fig. 5.
When following the construction through the nesting levels, please keep in mind that one may sometimes choose among isomorphic pushout objects and the numbers of new nodes are arbitrary, but the nodes 1, 2 and (as created by the transformation α′) 5 are never "unselected" and therefore remain present in every type graph occurring in the weakest preconditions; similarly for the edges (not numbered because their mapping is unambiguous in the example).
4 Correctness Relative to µ-conditions
In the previous section, we have shown how the weakest liberal precondition
construction for nested conditions carries over to µ-conditions. The next task is
to develop methods for deducing correctness relative to µ-conditions and extend
the proof calculus, for which we offer a partial solution in this section.
The soundness of Pennemann’s calculus K has been established in the publications
introducing it, and recently a tableau-based completeness proof of K was published
[11]. The proof rules of K are easily seen to be sound for µ-conditions as well;
however, the recursive definitions require an extension.
For our calculus Kµ , we adopt the resolution-style rules of K and add an induc-
tion principle to deal with certain situations involving fixed points. This proved
to be sufficient to handle all the situations encountered in the examples.
We employ a sequent notation: the inference rules manipulate objects Ctx : Γ ⊢
∆, with the intended meaning that the disjunction of ∆ can be deduced from
the conjunction of Γ in the context Ctx. The context Ctx is a pair of a left-hand
side of a sequent and an operator on µ-conditions. Γ and ∆ are sets of
expressions, which differ from conditions in that identifiers can be used for the
main bodies of µ-conditions, and both these and the variables can be annotated
with their recursion depth, an implicitly universally quantified natural number.
xi^(n) then stands for Fi^(n)(false), and an auxiliary rule permits unrolling it
to the i-th component of F applied to x^(n−1).
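The depth-annotated variables can be read as Kleene approximants. The following toy sketch (the operator F and the edge set are invented for illustration and are not part of the formalism) computes the chain x^(0) = false, x^(1) = F(false), ... on a finite powerset lattice, where it stabilises at the least fixed point:

```python
# Toy sketch of the approximant notation x^(n) = F^n(false): Kleene
# iteration of a monotone operator F on the powerset of a small node
# set, with false read as the empty set.

def approximants(F, bottom, max_steps=100):
    """Return the chain x^(0), x^(1), ... up to the least fixed point."""
    chain = [bottom]
    while len(chain) <= max_steps:
        nxt = F(chain[-1])
        if nxt == chain[-1]:  # fixed point reached: chain stabilises
            return chain
        chain.append(nxt)
    raise RuntimeError("no fixed point within max_steps")

# Example monotone operator: node 0 plus successors of already
# collected nodes; its least fixed point is the set reachable from 0.
edges = {(0, 1), (1, 2), (2, 3)}

def F(s):
    return frozenset({0}) | frozenset(t for (u, t) in edges if u in s)

chain = approximants(F, frozenset())
# chain[0] is false (the empty set); chain[-1] is the least fixed point
```

Each x^(n) in the chain corresponds to unrolling the recursion n times, matching the auxiliary unrolling rule just described.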
The induction rule announced above is (where Hi,j, for each i ∈ {1, ..., ‖I‖} and
j ∈ {1, ..., ‖J‖}, is any condition with placeholders):

  xi^(m) ∧ ¬yj^(n) ⊢ Hi,j(x1^(m−1) ∧ ¬y1^(n), ..., x‖I‖^(m−1) ∧ ¬y‖J‖^(n))  for all i, j      Hi,j(false) = false  for all i, j
  ─────────────────────────────────────────────────────────────────────────
  ⋁i (xi ∧ ¬yi) ⊢ false
  (IndMuEmpty)
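The intuition behind such an induction principle can be seen in a drastically simplified, single-variable toy version (an invented example, not the graph-condition rule itself): if a monotone step operator H maps false to false, then every approximant, and hence the least fixed point, is false:

```python
# Toy single-variable analogue of the side condition Hi,j(false) = false:
# if H(bottom) = bottom for a monotone H, then every Kleene approximant
# H^n(bottom) equals bottom, so the least fixed point is empty ("false").
# The operator H below is invented for illustration.

def all_approximants_bottom(H, bottom, steps=50):
    """Check that the first `steps` approximants all stay at bottom."""
    x = bottom
    for _ in range(steps):
        x = H(x)
        if x != bottom:
            return False
    return True

# Monotone operator keeping only elements whose predecessor is present;
# it maps the empty set to itself, so its least fixed point is empty.
def H(s):
    return frozenset(t for t in s if t - 1 in s)

side_condition = (H(frozenset()) == frozenset())
empty_lfp = all_approximants_bottom(H, frozenset())
```

The actual rule is considerably more general: it handles families of variables xi, ¬yj and a Cartesian product of index pairs, but the underlying induction on recursion depth is the same.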
Theorem 3. Kµ := K ∪ {IndMuEmpty} is sound.
There are a number of details hidden in the discussion above: Boolean operations
must be lifted to µ-conditions, which entails variable renaming and taking the union
of the systems of equations; rules for exploiting logical equivalences between
different Boolean combinations are necessary to transform conditions equivalently
into a form suitable for the application of the rules of K; in [12], each Boolean
combination appearing inside a nested condition is put into conjunctive normal form
prior to the application of rules. Proof trees in the sequent-style calculus Kµ start
with instances of the axiom (A ⊢ A is derivable by a rule with no antecedents)
and make use of all the classical sequent rules [5] not involving quantifiers.
Our handling of nested contexts relies on substitutions: a context is a pair of a
left-hand side of a sequent and a graph condition with a special variable. The
rule for manipulating the context is usable in both directions:

  ⊢ Ctx(x)
  ──────── (Ctx)
  Ctx ⊢ x
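As a very loose picture of the substitution mechanism behind (Ctx), a context may be thought of as a formula with a hole into which the current condition is plugged. The string-based sketch below is purely illustrative: the hole marker and formulas are invented, and actual contexts operate on graph conditions, not strings.

```python
# Purely illustrative hole-filling: rule (Ctx) moves between
# "⊢ Ctx(x)" and "Ctx ⊢ x" by substituting the condition x into the
# context's hole. HOLE and the example formulas are invented.

HOLE = "•"

def apply_context(ctx: str, x: str) -> str:
    """Substitute the condition x for the hole in the context ctx."""
    return ctx.replace(HOLE, x)

ctx = "∀y. (p(y) → •)"
print(apply_context(ctx, "q(y)"))  # prints: ∀y. (p(y) → q(y))
```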
5 Related Work
Recently, Poskitt and Plump [16] have presented a weakest precondition calculus
for a different extension of nested conditions (monadic second-order conditions)
and demonstrated how to use it in a Hoare logic. The method is arguably closer
to reasoning directly in a logic and less graph condition like, but seems successful
at solving some of the same problems in a different way. HR∗ conditions [17] are
another approach towards the same goal; they have already been mentioned in
the main text and recently there has been an effort at extending the weakest
precondition calculus to a subclass including path expressions. Verification of
graph transformation systems has also been performed within general-purpose
theorem proving environments by Strecker et al. [19, 13], with positive path
conditions. Verification of graph transformation systems via model checking of
abstractions, as opposed to the prover-based approach pursued here, can be
found in Gadducci et al., Baldan et al., König et al., Rensink et al. [4, 1, 10, 18].
A summary overview of graph conditions for non-local properties is attempted
below (a proof calculus is presented in [16], but completeness of a proof calculus
has so far only been obtained by Lambers and Orejas [11] for nested conditions
and remains to be researched for the other approaches). Note that while HR∗
conditions are known to properly contain the monadic second-order definable
properties [17], and nested conditions are a special case of each of the other
three, we have not yet been able to separate µ-conditions from MSO or HR∗:
reference                 [12]     (here)        [17]           [16]
conditions                Nested   µ-            HR∗            MSO-
wlp                       yes      yes           incomplete⁶    yes
complete proof calculus   yes      future work
theorem prover            yes      future work

⁶ Radke, personal communication.
6 Conclusion and Outlook
We have introduced µ-conditions and achieved several results, mainly a weakest
liberal precondition transformation (Theorem 2) and the soundness of a proof
calculus (Theorem 3), and we have discussed correctness relative to µ-conditions,
which appears to be fruitful ground for further investigations.
In analogy to the equivalence between first-order predicate graph logic and nested
graph conditions, we are investigating whether µ-conditions have the same
expressivity as fixed-point extensions of classical first-order logic on finite graphs.
Also, the expressivity of HR∗ conditions [17] or even MSO likely surpasses that
of µ-conditions, but the precise relationship remains to be examined. As the
examples show, the weakest precondition calculus (which is still a research ques-
tion for HR∗ conditions [17] but readily available by logical means in the MSO-
conditions formalism [16]) produces quite unwieldy expressions due to partial
shift. The blowup is exponential in the size of the interface graphs used in the
rule, and seems unavoidable because of the need to use a fixed set of type graphs
for the finitely many variables (and a blowup is also inherited from the weak-
est precondition calculus of [12]). Rule IndMuEmpty also contributes because
it involves a Cartesian product between variable sets. We have devised heuris-
tics to simplify the expressions, but even if many of the cases can be resolved
automatically, this issue still raises concerns as to the practical applicability.
Future work will also include tool support with special attention to semi-auto-
mated reasoning, based on the reasoning engine Enforce implemented in [12].
To extend the weakest liberal precondition construction to programs with it-
eration, one would have to provide, or have the prover attempt to determine,
an invariant, as in the original work of Pennemann; to obtain termination, one
could proceed as in [14] and prove a termination variant. We plan a further
generalisation to correctness under adverse conditions, i.e. systems subject to
environmental interference, also modelled as a graph program. Furthermore, it
appears that µ-conditions might readily generalise to temporal properties, even
with the option to nest temporal operators inside quantifiers, which would allow
properties such as the preservation of a specific node to be expressed (but would
require further proof rules). This could be achieved by introducing a temporal
next operator parameterised on atomic subprograms (the basic steps of Def. 4);
since in the semantics of these program steps the relationship between the
interfaces is deterministic, such an expression would again have an unambiguous
type, making it suitable for use as a subcondition. Whether this offers
any new insights remains to be seen. Eventually, we would also like to deal with
algebraic operations on attributes and extend our work to a practical verification
method that separates the graph specific concerns from other aspects and allows
proofs of properties that depend on both, for example involving data structures
whose elements should remain ordered. Finally, the limitations imposed by
undecidability prompt the search for restricted decidable classes.
Acknowledgements
We thank Annegret Habel, many other members of SCARE and the anonymous
reviewers for constructive criticism of the approach and the paper.
References
1. Baldan, P., König, B., König, B.: A logic for analyzing abstractions of graph trans-
formation systems. In: Static Analysis, pp. 255–272. Springer (2003)
2. Dijkstra, E.W.: A discipline of programming. Prentice Hall (1976)
3. Ehrig, H., Ehrig, K., Prange, U., Taentzer, G.: Fundamentals of Algebraic Graph
Transformation. Monographs in Theoretical Computer Science, Springer (2006)
4. Gadducci, F., Heckel, R., Koch, M.: A fully abstract model for graph-interpreted
temporal logic. In: TAGT’98. LNCS, vol. 1764, pp. 310–322 (1998)
5. Gentzen, G.: Untersuchungen über das logische Schließen. I. Mathematische
Zeitschrift 39(1), 176–210 (1935)
6. Habel, A., Pennemann, K.H.: Correctness of high-level transformation systems
relative to nested conditions. Math. Struct. in Comp. Sci. 19(2), 245–296 (2009)
7. Habel, A., Pennemann, K.H., Rensink, A.: Weakest preconditions for high-level
programs. In: ICGT 2006. LNCS, vol. 4178, pp. 445–460 (2006)
8. Hoare, C.A.R.: An axiomatic basis for computer programming. Communications
of the ACM 26(1), 53–56 (1983)
9. Kreutzer, S.: Pure and Applied Fixed-Point Logics. Ph.D. thesis, RWTH Aachen
(2002)
10. König, B., Kozioura, V.: Counterexample-guided abstraction refinement for the
analysis of graph transformation systems. LNCS, vol. 3920, pp. 197–211 (2006)
11. Lambers, L., Orejas, F.: Tableau-based reasoning for graph properties. In: Graph
Transformation, LNCS, vol. 8571, pp. 17–32 (2014)
12. Pennemann, K.H.: Development of Correct Graph Transformation Systems. Ph.D.
thesis, Universität Oldenburg (2009)
13. Percebois, C., Strecker, M., Tran, H.N.: Rule-level verification of graph transfor-
mations for invariants based on edges’ transitive closure. In: SEFM 2013. LNCS,
vol. 8137, pp. 106–121 (2013)
14. Poskitt, C.M.: Verification of Graph Programs. Ph.D. thesis, University of York
(2013)
15. Poskitt, C.M., Plump, D.: Verifying total correctness of graph programs. Electronic
Communications of the EASST 61 (2013)
16. Poskitt, C.M., Plump, D.: Verifying monadic second-order properties of graph pro-
grams. In: Graph Transformation, LNCS, vol. 8571, pp. 33–48 (2014)
17. Radke, H.: HR* graph conditions between counting monadic second-order and
second-order graph formulas. Electronic Communications of the EASST 61 (2013)
18. Rensink, A., Distefano, D.: Abstract graph transformation. ENTCS 157(1), 39–59
(2006)
19. Strecker, M.: Modeling and verifying graph transformations in proof assistants.
ENTCS 203(1), 135–148 (2008)
20. Tarski, A.: A lattice-theoretical fixpoint theorem and its applications. Pacific J.
Math. 5(2), 285–309 (1955)