             Corrections to the program verification rules

                                 Tibor Gregorics                      Zsolt Borsi
                                 gt@inf.elte.hu                    bzsr@inf.elte.hu
                                            ELTE Eötvös Loránd University
                                               Faculty of Informatics
                                                 Budapest, Hungary




                                                        Abstract
                       The subject of this paper is a program verification method that takes
                       into account abortion caused by partial functions in program state-
                       ments. In particular, boolean expressions of various statements that are
                       not well-defined will be investigated. For example, a loop aborts if
                       its execution begins in a state for which the loop condition is unde-
                       fined. This work considers the program constructs of nondeterministic
                       sequential programs and also deals with the synchronization statement
                       of parallel programs introduced by Owicki and Gries [Owi76]. The syn-
                       tax of program constructs will be reviewed and their semantics will be
                       formally defined in such a way that they suit the relational model of
                       programming developed at Eötvös Loránd University [Fot88, Fot95].
                       This relational model defines a program as the set of its possible exe-
                       cutions and also provides definitions for other important programming
                       notions, such as problem and solution. The proof rules of total correct-
                       ness [Dij76, Fot05, Gri81, Hoa69, Owi76] will be extended to treat
                       abortion caused by partial functions. The use of these rules will be
                       demonstrated by means of a verification case study.




1    Introduction
In mathematics, a partial function is a binary relation from X to Y that does not map every element of X to
an element of Y . It is well known that the expression a/b is not defined if the divisor b is zero. But even the
subtraction x − y may have no defined value: this occurs if x and y are natural numbers, x is less than y, and
the expected value should also be a natural number. More precisely, f : N × N → N (where f (x, y) = x − y)
is a partial function, because not every element of the set N × N is in the domain of f . The value of a partial
function is undefined when its argument is out of the domain of the partial function.
   In programming, an attempt to divide by zero is handled in various ways depending on the programming
environment. It either leads to a compile-time error or produces a catchable runtime error if it happens at runtime.
In general, evaluating an undefined expression may lead to an exception or to undefined behaviour. Programmers
encounter expressions on a daily basis when they are constructing statements. The most commonly used
form of the assignment statement assigns the value of an expression to a variable. In particular, boolean expressions
are concerned when one writes a loop condition or the conditions of an alternative statement. The value of these
expressions is not necessarily defined mathematically.
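   As a small illustration (ours, not part of the verification model discussed later), the following Python sketch mimics the two partial operations mentioned above; an argument outside the domain raises an exception, the runtime analogue of a mathematically undefined value.

    # Illustration: two partial operations; arguments outside the domain raise,
    # which is the runtime analogue of the mathematically undefined value.

    def nat_sub(x: int, y: int) -> int:
        """Subtraction on natural numbers: defined only when x >= y >= 0."""
        if x < 0 or y < 0 or x < y:
            raise ValueError("nat_sub is undefined for these arguments")
        return x - y

    def div(a: int, b: int) -> int:
        """Integer division a/b: undefined when the divisor b is zero."""
        if b == 0:
            raise ZeroDivisionError("a/b is undefined for b = 0")
        return a // b

    print(nat_sub(5, 3))   # 2  (inside the domain)
    print(div(6, 2))       # 3  (inside the domain)
    # nat_sub(3, 5) and div(6, 0) raise: the arguments are outside the domain.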

Copyright © by the paper’s authors. Copying permitted for private and academic purposes.
In: E. Vatai (ed.): Proceedings of the 11th Joint Conference on Mathematics and Computer Science, Eger, Hungary, 20th – 22nd of
May, 2016, published at http://ceur-ws.org




    There are programs that have to work without any error. To be able to reason about the correctness of such
programs we need a rigorous definition of the semantics of the language in which they are written. We also need
methods that allow us to verify the correctness of programs. When one provides the semantics of a program
construct, the functions that are used to build the programming statement (the loop condition, for example) are
usually assumed to be total functions. The usual verification methods likewise consider program descriptions in
which these functions are well-defined.
    This paper focuses on partial functions in program descriptions. In particular, boolean expressions of various
statements that are not well-defined will be considered. They will be taken into account when providing the
semantics of statements. The semantics of program constructs are given in such a way that they suit an existing
relational programming model. This model defines the basic concepts of programming; for example, it defines
a program as the set of its possible executions. A verification method that handles statements containing partial
logical functions will be presented as well. Some rules of the verification method are well known; the new
rules will be given along with their proofs.
    The rest of this paper is organized as follows. Section 2 reviews how partial functions are handled in the
literature. Section 3 introduces the keywords that may be used to build the programs we want to investigate. Next,
the semantics of these elementary programs and constructs are provided. The above-mentioned relational programming
model is also presented there briefly. Then we provide a verification rule for each statement that can be formed
from our keywords. Section 4 presents a verification example and illustrates the use of the verification rules given
in the previous section. Section 5 summarizes our approach and work.

2     Related work
Z. Manna in [Man74] presents a verification method for flowchart programs. A flowchart program is a diagram
constructed by edges and nodes, where the nodes denote statements. y ← g(x, y) stands for an assignment
statement where g(x, y) is a total function mapping Dx × Dy into Dy .
   C. A. R. Hoare in [Hoa69] did not state explicitly that the conditions of the alternative and loop constructions
were total logical functions, but his examples showed this.
   K. R. Apt and E.-R. Olderog use total logical functions, namely Boolean expressions, in their work. For
example, to ensure that the expressions x div y and x mod y are total, they additionally stipulate x div y = 0
and x mod y = x for the special case of y = 0 [Apt97].
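   For illustration only (a sketch of ours, not code from [Apt97]), such a totalizing stipulation can be mimicked as follows:

    # Sketch of a totalizing convention in the spirit of [Apt97]: div and mod are
    # made total by fixing their value for the special case of a zero divisor.

    def total_div(x: int, y: int) -> int:
        return 0 if y == 0 else x // y   # stipulated: x div y = 0 when y = 0

    def total_mod(x: int, y: int) -> int:
        return x if y == 0 else x % y    # stipulated: x mod y = x when y = 0

    assert total_div(7, 0) == 0 and total_mod(7, 0) == 7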
   D. Gries in his fundamental work on program correctness states that the guards β1 , . . . , βn have
to be well-defined boolean expressions to make sure that the alternative command avoids abortion. However, in
his verification rules he assumes that all boolean expressions used in program descriptions (i.e. loop conditions,
guards of guarded commands) are well-defined, and so he drops this condition in order to keep the verification rules
simpler [Gri81].
   Willem-Paul de Roever et al. extend Floyd’s inductive assertion method to sequential transition
systems. In sequential transition systems edges are labelled by commands of the form c → f . In their work
c has to be a total boolean condition, but partial state transformations that might lead to a runtime error (for
example c → x := 1/y, where c is the guard of the command x := 1/y) may be used. They present a method
for proving that the execution of a given program will not apply undefined operations [Roe01].

3     Theoretical background
3.1   Syntax
A parallel nondeterministic program (let S denote it) can be described by a finite string of symbols including
the keywords skip, abort, if, fi, while, do, od, [, ], await, then, ta, parbegin, ∥ and parend, which is
generated by the following grammar:


                         S = skip | abort | v := f (v) | [S0 ] | await β then S0 ta |
                             S1 ; S2 | if π1 → S1 □ . . . □ πn → Sn fi | while π do S0 od |
                             parbegin S1 ∥ · · · ∥ Sn parend

where v denotes the current variables, π, π1 , . . . , πn , β are partial logical functions over the current state space,
and S0 , S1 , . . . , Sn are programs. The statement skip is the empty program (doing nothing), abort is the wrong program




(resulting in fail), v := f (v) is the nondeterministic assignment, (S1 ; S2 ) is the sequential composition,
if π1 → S1 □ . . . □ πn → Sn fi is the nondeterministic conditional statement, while π do S0 od is the loop,
[S0 ] is the atomic region, await β then S0 ta is the await-statement, and parbegin S1 ∥ · · · ∥ Sn parend
is the parallel composition.
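   Purely as an illustration (ours, with hypothetical type names), the abstract syntax above can be represented by a small family of node types; partiality of conditions and state transformations is modelled by allowing an undefined result:

    # Hypothetical representation of the abstract syntax above (names are ours).
    from dataclasses import dataclass
    from typing import Callable, List, Optional, Tuple

    State = dict                                    # a state: variable name -> value
    Cond = Callable[[State], Optional[bool]]        # partial condition: None = undefined
    Transform = Callable[[State], Optional[State]]  # partial state transformation

    @dataclass
    class Skip: pass                                # skip

    @dataclass
    class Abort: pass                               # abort

    @dataclass
    class Assign:                                   # v := f(v)
        f: Transform

    @dataclass
    class Seq:                                      # S1 ; S2
        s1: object
        s2: object

    @dataclass
    class If:                                       # if pi_1 -> S_1 [] ... [] pi_n -> S_n fi
        branches: List[Tuple[Cond, object]]

    @dataclass
    class While:                                    # while pi do S0 od
        cond: Cond
        body: object

    @dataclass
    class Atomic:                                   # [S0]
        body: object

    @dataclass
    class Await:                                    # await beta then S0 ta
        cond: Cond
        body: object

    @dataclass
    class Par:                                      # parbegin S1 || ... || Sn parend
        components: List[object]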

3.2   Semantics
Before the semantics of the elementary programs and the program constructions described above are shown, the
concept of the program must be clarified.
    All concepts of our programming model, like the concept of the program itself, are based on the state space.
The concept of the state space has already been interpreted in several ways. For many people, the state space is
a model of a von Neumann type of computer; others, e.g. Dijkstra [Dij76], associate this notion with the problem
to be solved, where a state is a compound of the values of the main data types. So the program is “outside” of
the state space, operating on it. In our programming model, this second meaning is used. In [Fot88], the notion
of the state space is a Cartesian product of the type value sets of the data types. The only drawback of this obvious
definition is that it imposes an order on the components. In [Fot05, Gre12], this shortcoming has been remedied.
    A program is the set of its possible executions. An execution is a sequence of states. A program, by definition,
can always begin, i.e. at least one execution has to start from each state. The program is nondeterministic
because several executions may start from the same state and nobody knows which execution will happen. The
first state (start state) of every execution, and its last state if the execution is finite, belong to the so-called base state
space. The state space itself may change during execution: the inner states of the executions may gain
new components because the program can create and destroy components (variables) during its execution,
so the state space changes dynamically. Two constraints are given: all new components have to be destroyed at
the termination at the very latest, and the base variables must never be removed. The current state always
contains the components of the base state space. The variables of the base state space are the base variables;
the other variables are the auxiliary variables of the program. Thus the base state space is always a subspace
of the current state space. The case when the execution of a program goes wrong is denoted by a finite
sequence of states whose last state is fail.
    A sequence can be given by the enumeration of its elements between the signs “<” and “>”: < e1 , e2 , · · · >.
We will use the interconnection of two sequences if the end of the first sequence is identical to the front of the
second one. More precisely, if α = < a1 , . . . , an > and β = < b1 , b2 , · · · > are sequences and an = b1 ≠ fail, then
their interconnection is α ⊗ β = < a1 , . . . , an , b2 , · · · >.
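    A small illustration (ours) of the interconnection operation on finite sequences:

    # Interconnection of two finite sequences: the last element of the first must
    # equal the first element of the second and must not be fail; it is kept once.
    FAIL = "fail"

    def interconnect(alpha: list, beta: list) -> list:
        assert alpha and beta and alpha[-1] == beta[0] and alpha[-1] != FAIL
        return alpha + beta[1:]

    print(interconnect(["a1", "a2"], ["a2", "b2", "b3"]))  # ['a1', 'a2', 'b2', 'b3']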
    The formal definition of the program [Gre12] requires some notions. Let H ∗∗ denote the set of all finite and
infinite sequences of the elements of set H. H ∞ includes the infinite sequences; H ∗ contains the finite ones. So,
H ∗∗ = H ∗ ∪ H ∞ and H ∗ ∩ H ∞ = ∅. The length of the sequence α ∈ H ∗∗ is |α|, the value of which is ∞ if the
sequence is infinite.
Definition 1. Let A be the so-called base state space and Ā be the set of all states which belong to the state spaces
B whose subspace is A, i.e. Ā = ∪A≤B B. Ā does not contain the state fail. The relation S ⊆ A × (Ā ∪ {fail})∗∗
is a program over A, if
 1. DS = A
 2. ∀a ∈ A and ∀α ∈ S(a) : |α| ≥ 1 and α1 = a
 3. ∀α ∈ RS and ∀i (1 ≤ i < |α|) : αi ≠ fail
 4. ∀α ∈ RS : |α| < ∞ → α|α| ∈ A ∪ {fail}
   Now, we are going to give the semantics of the elementary programs and program constructions so that they
are treated as programs in the sense of the previous definition. Our aim is to define the set of state-sequences
that are mapped to an arbitrary state by a construction.


Definition 2. Let A be a state space and σ ∈ A be the current state.

skip(σ) ::= {< σ >}




Definition 3. Let A be a state space and σ ∈ A be the current state.

abort(σ) ::= {< σ, fail >}


Definition 4. Let A be a state space and f ⊆ A × A be a relation and σ ∈ A be the current state.
(v := f (v))(σ) ::=   {< σ, σ′ > | σ′ ∈ f (σ)}   if σ ∈ Df
                      {< σ, fail >}              if σ ∉ Df
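As a sketch of ours (assuming a finite state space and a finite relation f given explicitly), the set of executions prescribed by Definition 4 can be computed directly:

    # Executions of the nondeterministic assignment v := f(v) following Definition 4.
    # The relation f is given as a dict mapping a state to the set of possible next
    # states; a state that is not a key lies outside the domain Df.
    FAIL = "fail"

    def assign_executions(f: dict, sigma):
        if sigma in f:                               # sigma in Df
            return {(sigma, sigma2) for sigma2 in f[sigma]}
        return {(sigma, FAIL)}                       # undefined: the execution aborts

    f = {1: {2, 3}}                                  # partial relation, undefined at 0
    print(assign_executions(f, 1))   # {(1, 2), (1, 3)}: two possible executions
    print(assign_executions(f, 0))   # {(0, 'fail')}: abortion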

Definition 5. Let A be the common base state space of the programs S1 and S2 . Let σ ∈ A be the current state.

(S1 ; S2 )(σ) ::= {α | α ∈ S1 (σ) ∩ A∞ } ∪
                {α | α ∈ S1 (σ) and |α| < ∞ and α|α| = fail} ∪
                {α ⊗ β | α ∈ S1 (σ) ∩ A∗ and β ∈ S2 (α|α| )}

Definition 6. Let A be the common base state space of the programs S1 , . . . , Sn and the conditions π1 , . . . , πn . Let
σ ∈ A be the current state.

(if π1 → S1 □ . . . □ πn → Sn fi)(σ) ::= ω(σ) ∪ ⋃ { Si (σ) | i ∈ [1..n] ∧ σ ∈ Dπi ∧ πi (σ) }

where ω(σ) =  {< σ, fail >}   if (∃i ∈ [1..n] : σ ∉ Dπi ) ∨ (∀i ∈ [1..n] : σ ∈ Dπi ∧ ¬πi (σ))
              ∅               otherwise
Definition 7. Let A be the common base state space of the program S0 and the condition π. Let σ ∈ A be the
current state.
                          
                            (S0 ; while π do S0 od)(σ)   if σ ∈ Dπ ∧ π(σ)
(while π do S0 od)(σ) ::=   {< σ >}                      if σ ∈ Dπ ∧ ¬π(σ)
                            {< σ, fail >}                if σ ∉ Dπ
                          
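The following sketch (ours, restricted to a deterministic loop body and finite traces) follows the three cases of Definition 7; an undefined loop condition yields an execution ending in fail:

    # One execution of "while pi do S0 od" following the three cases of Definition 7.
    # cond returns True/False, or None when the state is outside its domain;
    # body maps a state to the next state (deterministic, for simplicity).
    FAIL = "fail"

    def run_while(cond, body, sigma, max_steps=1000):
        trace = [sigma]
        for _ in range(max_steps):
            c = cond(sigma)
            if c is None:              # sigma not in D_pi: the execution aborts
                trace.append(FAIL)
                return trace
            if not c:                  # condition false: the loop terminates
                return trace
            sigma = body(sigma)        # condition true: one more pass of the body
            trace.append(sigma)
        raise RuntimeError("step limit reached (possibly an infinite execution)")

    # The condition 100/x > 1 is undefined at x = 0, so this run ends in fail.
    cond = lambda x: None if x == 0 else 100 // x > 1
    body = lambda x: x - 5
    print(run_while(cond, body, 15))   # [15, 10, 5, 0, 'fail']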

Definition 8. Let A be the base state space of the program S and σ ∈ A be the current state.
[S](σ) ::= S(σ)


Definition 9. Let A be the common base state space of the program S0 and the condition β. Let σ ∈ A be the
current state.
                            
                              [S0 ](σ)                          if σ ∈ Dβ ∧ β(σ)
(await β then S0 ta)(σ) ::=   (skip; await β then S0 ta)(σ)     if σ ∈ Dβ ∧ ¬β(σ)
                              {< σ, fail >}                     if σ ∉ Dβ
                            

   Before the definition of the parallel composition, the concept of an “uninterrupted” execution must be introduced. Let
S be a program and σ be an arbitrary state of its base state space. The execution S(σ) is uninterrupted
if the program S is the skip, abort, an assignment statement, an atomic region [P ], or an await statement
await β then S0 ta where σ ∉ Dβ or β(σ) is true. (In these cases the await statement is the abort or the
atomic region [S0 ].)
   We must remark that if S(σ) is not uninterrupted (where σ is an arbitrary state and S is a program),
then S can always be split into two programs so that S(σ) = (u; T )(σ), where u(σ) is uninterrupted. This u
is called the first statement of S and T is the remainder part of S relative to the state σ.

  • If S = await β then S0 ta and σ ∈ Dβ ∧ ¬β(σ), then u is skip and T is await β then S0 ta.
  • If S = (S1 ; S2 ) and S1 (σ) is uninterrupted, then u is S1 and T is S2 .
  • If S = (S1 ; S2 ) and S1 (σ) is not uninterrupted, then u is the first statement of S1 and T is the sequence
    (T1 ; S2 ), where T1 is the remainder part of S1 .




  • If S = if π1 → S1 □ . . . □ πn → Sn fi and one of its conditions is not defined or all conditions are false in σ
    (∃i ∈ [1..n] : σ ∉ Dπi ∨ ∀i ∈ [1..n] : σ ∈ Dπi ∧ ¬πi (σ)), then u is abort and T is abort.
  • If S = if π1 → S1 □ . . . □ πn → Sn fi and all conditions are defined and some of them are true in σ
    (∀i ∈ [1..n] : σ ∈ Dπi ∧ ∃i ∈ [1..n] : πi (σ)), then u is skip and T is Si , assuming the ith branch is selected by
    the scheduler.
  • If S = while π do S0 od and its condition is not defined (σ ∉ Dπ ), then u is abort and T is abort.
  • If S = while π do S0 od and the condition is false in σ (σ ∈ Dπ ∧ ¬π(σ)), then u is skip and the remainder
    part of S is skip.
  • If S = while π do S0 od and the condition is true in σ (σ ∈ Dπ ∧ π(σ)), then u is skip and T is
    (S0 ; while π do S0 od).
  • If S = parbegin P1 ∥ · · · ∥ Pi ∥ · · · ∥ Pn parend and its ith branch is selected by the scheduler and Pi (σ)
    is uninterrupted, then u is Pi and T is parbegin P1 ∥ · · · ∥ Pi−1 ∥ Pi+1 ∥ · · · ∥ Pn parend.
  • If S = parbegin P1 ∥ · · · ∥ Pi ∥ · · · ∥ Pn parend and its ith branch is selected by the scheduler and Pi (σ)
    is not uninterrupted, then u is the first statement of Pi and T is parbegin P1 ∥ · · · ∥ Ti ∥ · · · ∥ Pn parend,
    where Ti is the remainder part of Pi .

Definition 10. Let A be the common base state space of the programs S1 , . . . , Sn and σ ∈ A be the current state.

(parbegin S1 ∥ · · · ∥ Si ∥ · · · ∥ Sn parend)(σ) ::= ⋃ { Bi (σ) | i ∈ [1..n] }

where

Bi (σ) =  (Si ; parbegin S1 ∥ · · · ∥ Si−1 ∥ Si+1 ∥ · · · ∥ Sn parend)(σ)        if Si (σ) is uninterrupted
          (ui ; parbegin S1 ∥ · · · ∥ Si−1 ∥ Ti ∥ Si+1 ∥ · · · ∥ Sn parend)(σ)   if Si (σ) = (ui ; Ti )(σ), where ui (σ) is uninterrupted
         

3.3     Verification
Informally, a program is correct if it satisfies the intended input/output relation. Program correctness is expressed
by so-called correctness formulas. These are statements of the form

                                                   {{Q}}S{{R}}

where S is a program and Q and R are assertions. The assertion Q is the precondition of the correctness formula
and R is the postcondition. The precondition describes the set of initial states in which the program S is started
and the postcondition describes the set of desirable final states.
   More precisely: a correctness formula is true if every execution of S that starts in a state satisfying Q is
finite (it terminates) and its final state satisfies R. (This is the concept of total correctness. Partial
correctness is omitted in this paper.)
   Reasoning about correctness formulas directly in terms of the semantics is not very convenient. Hoare introduced
a proof system that allows us to prove partial correctness of deterministic programs in a syntax-directed man-
ner, by induction on the program syntax [Hoa69]. Dijkstra, Gries and Owicki extended this system to
nondeterministic and parallel programs [Dij76, Gri81, Owi76]. Now this system is going to be extended.
   The first six rules are well known; they are shown only for the sake of completeness, without their proofs.
Theorem 1. Let Q and R be two assertions.
                                                    Q =⇒ R
                                                 {{Q}} skip {{R}}
Theorem 2. Let Q and R be two assertions, v := f (v) be an assignment.
                                        Q =⇒ v ∈ Df ∧ ∀e ∈ f (v) : Rv←e
                                           {{Q}} v := f (v) {{R}}




   The notation Rv←e means that the components of v in R must be replaced by the corresponding components of e. In
the case when the relation f ⊆ A × A of the assignment is a total function, i.e., f is a (deterministic) function
mapping from A to A with Df = A, the rule of assignment can be written in the following form.

                                               Q =⇒ Rv←f (v)
                                             {{Q}} v := f (v) {{R}}
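   As an illustration of ours (for a finite toy state space, not part of the paper), the premise of Theorem 2 can be checked by brute force; note how a state outside Df falsifies it:

    # Brute-force check of the premise of Theorem 2 over a finite toy state space:
    # every state satisfying Q must lie in Df, and every possible result of f must
    # satisfy R.
    def assignment_rule_holds(states, Q, f, R) -> bool:
        for sigma in states:
            if not Q(sigma):
                continue
            if sigma not in f:                        # sigma outside Df
                return False
            if not all(R(s2) for s2 in f[sigma]):
                return False
        return True

    # Toy program x := x - 1 on natural numbers 0..9; it is undefined at x = 0.
    states = range(10)
    f = {x: {x - 1} for x in states if x > 0}
    print(assignment_rule_holds(states, lambda x: x >= 1, f, lambda x: x >= 0))  # True
    print(assignment_rule_holds(states, lambda x: x >= 0, f, lambda x: x >= 0))  # False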

Theorem 3. Let Q and R be two assertions, S be a program.

                                                 {{Q}} S {{R}}
                                                {{Q}} [S] {{R}}

Theorem 4. Let Q and R be two assertions, S1 and S2 be two programs.

                                                  ∃Q0 : A → L
                                                {{Q}} S1 {{Q0 }}
                                               {{Q0 }} S2 {{R}}
                                             {{Q}} (S1 ; S2 ) {{R}}

Theorem 5. Let Q and R be two assertions, and S ∗ stand for the program S annotated with assertions such as
preconditions of the assignments or invariants of the loops.

                                                {{Q}} S ∗ {{R}}
                                                {{Q}} S {{R}}

   The next three rules take into consideration the cases when some logical function of the construction is not
well-defined. These rules are extensions of the well-known versions that use well-defined logical functions.
In the following rules, an assertion P will often be interpreted as the set of states that satisfy P ; the context
decides which interpretation holds. For example, Q and R are assertions in the expression
Q =⇒ R but they are sets in Q ⊆ R.

Theorem 6. Let Q and R be two assertions, and S1 , . . . Sn be programs and π1 , . . . πn be conditions.

                                             Q =⇒ π1 ∨ · · · ∨ πn
                                              Q ⊆ Dπ1 ∩ · · · ∩ Dπn
                                     ∀i ∈ {1, . . . , n} : {{Q ∧ πi }} Si {{R}}
                                   {{Q}} if π1 → S1 □ . . . □ πn → Sn fi {{R}}

Proof. We must show that the executions of the conditional statement starting from Q finish at R. Let q be
an arbitrary state of Q. Since Q ⊆ Dπ1 ∩ · · · ∩ Dπn and Q =⇒ π1 ∨ · · · ∨ πn , according to the definition each
execution of the conditional statement starting from q belongs to the executions of some component Si whose
condition πi is satisfied by q.
   These executions terminate in a state satisfying R because ∀i ∈ {1, . . . , n} : {{Q ∧ πi }}Si {{R}}; thus
{{Q}} if π1 → S1 □ . . . □ πn → Sn fi {{R}} holds.

Theorem 7. Let Q and R be two assertions, and S0 be a program and π be a condition.

                                           ∃I : A → L and ∃t : A → Z
                                                   Q =⇒ I
                                                     I ⊆ Dπ
                                                 I ∧ ¬π =⇒ R
                                                I ∧ π =⇒ t ≥ 0
                                               {{I ∧ π}} S0 {{I}}
                                   ∀c0 ∈ Z : {{I ∧ π ∧ t = c0 }} S0 {{t < c0 }}
                                        {{Q}} while π do S0 od {{R}}




Proof. We need to prove that the executions of the loop starting from Q are finite and finish at R. Let q
be an arbitrary state of Q. Since Q =⇒ I and I ⊆ Dπ , we have q ∈ Dπ .
    The execution starting from q can be split into sections generated by the body S0 . Each section starts at
a state satisfying I ∧ π and terminates at a state satisfying I (see the condition {{I ∧ π}}S0 {{I}}). If the total
execution is finite, its last section finishes at a state satisfying ¬π, or the state q itself satisfies ¬π if the loop
stops at once. This means that each finite execution starting from q finishes at a state of R, since I ∧ ¬π =⇒ R.
    The proof is complete if we show that there is no infinite execution from q. If there were an infinite
execution, it would consist of infinitely many sections. Let us consider the infinite sequence of integers obtained
by applying the function t to the starting states of the sections. Because of the criterion ∀c0 ∈ Z : {{I ∧ π ∧ t =
c0 }} S0 {{t < c0 }} this sequence would be strictly monotonically decreasing, thus it would contain negative numbers.
However, all its members must be nonnegative because of the criterion I ∧ π =⇒ t ≥ 0. This is a contradiction.
Theorem 8. Let Q and R be two assertions, and S0 be a program and β be a condition.
                                                              Q ⊆ Dβ
                                                     {{Q ∧ β}} S0 {{R}}
                                               {{Q}} await β then S0 ta {{R}}
Proof. It is enough to show that the executions of the await statement starting from Q finish at R. Let
q be an arbitrary state of Q. Since Q ⊆ Dβ , the condition β is defined in q, so the await statement does not abort.
If q satisfies β, the executions of the await-statement starting from q are identical to the executions of the atomic
region [S0 ] starting from q. These executions finish at a state in R because of {{Q ∧ β}} S0 {{R}}.
     The last rule is about the parallel composition.
Theorem 9. Let Q, Q1 , . . . Qn and R, R1 , . . . Rn be assertions, S1 , . . . Sn be programs, and S1∗ , . . . Sn∗ be
the annotations of the corresponding programs.
                                                   Q =⇒ Q1 ∧ . . . ∧ Qn
                                               ∀i ∈ {1, . . . , n} : {{Qi }}Si∗ {{Ri }}
                                                 and they are interference free
                                             R1 ∧ . . . ∧ Rn =⇒ R
                                     {{Q}} parbegin S1 ∥ · · · ∥ Sn parend {{R}}
where the standard proof outlines {{Qi }}Si∗ {{Ri }}, i ∈ {1, . . . , n}, are called interference free if no normal as-
signment or atomic region u of a component program Si interferes with the proof outline {{Qj }}Sj∗ {{Rj }} of
another component program Sj where i ≠ j. We say that u does not interfere with {{Q}}S ∗ {{R}} if the following
conditions are satisfied:
    1. for all assertions r in {{Q}}S ∗ {{R}} the formula {{r ∧ pre(u)}}u{{r}} holds, where pre(u) is the precon-
       dition of u in the annotation S ∗ ,
    2. for every termination function t : Ā → Z in {{Q}}S ∗ {{R}}, where A is the base state space of the parallel
       composition, the formula {{pre(u) ∧ t = c0 }} u {{t ≤ c0 }} holds, where c0 is an arbitrary integer.

4      Case Study
Consider the following problem: given an array x of n integer numbers and an integer number k. Count the
elements of x that are divisors of k.
Specification of the problem can be given in the following way:
A = (x : Zⁿ , k : Z, count : N)
Pre = (x = x0 )
Post = (Pre ∧ count = Σ_{j=1}^{n} χ(x[j] | k))
where χ : L → {0, 1} and χ(true) = 1 and χ(false) = 0

     Let S denote the following program:




i, count := 1, 0
while i ≤ n do
    if
         x[i] | k → count := count + 1
      □  x[i] ∤ k → skip
    fi ;
    i := i + 1
od
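For intuition only (our transcription, not part of the paper), the program S can be written directly in Python; the divisibility test x[i] | k corresponds to k % x[i] == 0, which is undefined (it raises an exception) exactly when x[i] = 0, the situation examined later in the proof.

    # Direct transcription of the program S; indices are 1-based as in the paper.
    def count_divisors_of_k(x: list, k: int) -> int:
        n = len(x)
        i, count = 1, 0
        while i <= n:
            if k % x[i - 1] == 0:      # x[i] | k, undefined (raises) when x[i] == 0
                count = count + 1
            else:                      # x[i] does not divide k
                pass                   # skip
            i = i + 1
        return count

    print(count_divisors_of_k([2, 3, 5, 4], 12))   # 3, since 2, 3 and 4 divide 12
    # count_divisors_of_k([2, 0, 5], 12) raises ZeroDivisionError: this is exactly
    # the abortion that the unprovable verification condition warns about.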

Let Q0 denote the intermediate assertion of the sequence S, between the initialisation and the loop, Q00 the
intermediate assertion that holds before executing the assignment i := i + 1, Inv the invariant and t the variant
function of the loop.

   Now we shall prove that {{Pre}} S {{Post}} holds. Since S is a sequence, due to Theorem 4 it is sufficient
to prove that
 1. {{Pre}} i, count := 1, 0 {{Q0 }} and
 2. {{Q0 }} DO {{Post}}, where DO denotes the loop:
    while i ≤ n do if x[i] | k → count := count + 1 □ x[i] ∤ k → skip fi; i := i + 1 od
    and Q0 = (Pre ∧ count = 0 ∧ i = 1) is given.


   Let us prove the two conditions:
 1. {{Pre}} i, count := 1, 0 {{Q0 }}
    By replacing i with 1 and count with 0 in Q0 we obtain (Pre ∧ 0 = 0 ∧ 1 = 1), that is, Pre. Obviously
    Pre =⇒ Pre holds. We have thus proved that Pre =⇒ Q0 i←1,count←0 holds. Now, by the remark after Theorem 2,
    {{Pre}} i, count := 1, 0 {{Q0 }} is deduced.
 2. {{Q0 }} DO {{Post}}
    Instead of proving this verification condition, due to Theorem 7 it is sufficient to prove that
      (a) Q0 =⇒ Inv and
      (b) Inv ⊆ Di≤n and
      (c) Inv ∧ ¬(i ≤ n) =⇒ Post and
      (d) Inv ∧ i ≤ n =⇒ n − i ≥ 0 and
      (e) ∀c0 ∈ Z : {{Inv ∧ i ≤ n ∧ n − i = c0 }} S0 {{Inv ∧ n − i < c0 }}
     where S0 denotes the loop body if x[i] | k → count := count + 1 □ x[i] ∤ k → skip fi; i := i + 1,
     and the loop invariant Inv and the variant function t are given as follows:
     Inv = (Pre ∧ i ∈ [1..n + 1] ∧ count = Σ_{j=1}^{i−1} χ(x[j] | k))
     t = n − i
     Let us prove the conditions separately:
      (a) Q0 =⇒ Inv
          We have to prove that Q0 implies
              i. Pre
                 This holds since Q0 contains Pre.
             ii. i ∈ [1..n + 1]
                 Since i = 1, i is an element of the non-empty set [1..n + 1]. The interval [1..n + 1] is not empty,
                 because n, the length of the array x, is zero or a positive integer.
            iii. count = Σ_{j=1}^{i−1} χ(x[j] | k)
                Due to condition Q0 , count = 0 and i = 1. Thus the sum is empty and its value is also 0.




 (b) Inv ⊆ Di≤n
     Since i ≤ n is a well-defined logical function, its domain contains all states of the state space, including
     those that satisfy Inv.
 (c) Inv ∧ ¬(i ≤ n) =⇒ Post
       i. Pre
          The invariant contains the precondition, therefore Inv implies Pre.
      ii. count = Σ_{j=1}^{n} χ(x[j] | k)
         Since i ∈ [1..n + 1] and ¬(i ≤ n), we get i = n + 1.
         This, together with count = Σ_{j=1}^{i−1} χ(x[j] | k), which we know from Inv, yields the desired condition.

 (d) Inv ∧ i ≤ n =⇒ n − i ≥ 0
     This holds because the left-hand side guarantees i ≤ n, hence n − i ≥ 0.
(e) ∀c0 ∈ Z : {{Inv ∧ i ≤ n ∧ n − i = c0 }} S0 {{Inv ∧ n − i < c0 }}
     Let c0 be an arbitrary integer number. Since the loop body S0 is a sequence, we use Theorem 4 with
     Inv ∧ i ≤ n ∧ n − i = c0 as Q and with Inv ∧ n − i < c0 as R. It is sufficient to prove the following two
     conditions:
       i. {{Inv ∧ i ≤ n ∧ n − i = c0 }} IF {{Q00 }} and
      ii. {{Q00 }} i := i + 1 {{Inv ∧ n − i < c0 }}

     where IF denotes the conditional statement if x[i] | k → count := count + 1 □ x[i] ∤ k → skip fi
     and the intermediate assertion of the loop body Q00 is given as Q00 = (Inv i←i+1 ∧ n − i = c0 ).
       i. {{Inv ∧ i ≤ n ∧ n − i = c0 }} IF {{Q00 }}
          Due to Theorem 6, it is sufficient to prove the following conditions:
          A. Inv ∧ i ≤ n ∧ n − i = c0 =⇒ (x[i] | k ∨ x[i] ∤ k) and
          B. Inv ∧ i ≤ n ∧ n − i = c0 ⊆ Dx[i]|k ∩ Dx[i]∤k and
          C. {{Inv ∧ i ≤ n ∧ n − i = c0 ∧ x[i] | k}} count := count + 1 {{Q00 }} and
          D. {{Inv ∧ i ≤ n ∧ n − i = c0 ∧ x[i] ∤ k}} skip {{Q00 }}
         Let us prove the conditions separately:
          A. Inv ∧ i ≤ n ∧ n − i = c0 =⇒ (x[i] | k ∨ x[i] ∤ k)
              For each state of the state space for which Inv ∧ i ≤ n ∧ n − i = c0 holds, either x[i] is a divisor
              of k or x[i] is not a divisor of k.
          B. Inv ∧ i ≤ n ∧ n − i = c0 ⊆ Dx[i]|k ∩ Dx[i]∤k
              In order to ensure that x[i] | k and x[i] ∤ k are well-defined functions, we have to take into
              account not only that the divisibility x[i] | k can be decided only if x[i] is not zero, but also that the
              index i has to be inside the bounds of the array x. More precisely, we want to prove that
              Inv ∧ i ≤ n ∧ n − i = c0 =⇒ i ∈ [1..n] ∧ x[i] ≠ 0.
              Although i ∈ [1..n + 1] (due to the invariant) and the loop condition i ≤ n together allow us to
              deduce that i ∈ [1..n] holds, we cannot guarantee that in each state of the state space for which
              Inv ∧ i ≤ n ∧ n − i = c0 holds, x[i] is not 0. The reason is that there is no assumption about the
              elements of x, except that they are integer numbers. The case x[i] = 0 is not excluded
              by any condition we know and are allowed to use. If x[i] equals 0, the expressions x[i] | k
              and x[i] ∤ k have no defined value. The current condition cannot be proven. We provide the
              remaining part of the proof for the sake of completeness.
          C. {{Inv ∧ i ≤ n ∧ n − i = c0 ∧ x[i] | k}} count := count + 1 {{Q00 }}
              Let us recall that Q00 is (Inv i←i+1 ∧ n − i = c0 ).
              Q00 count←count+1 = (Pre ∧ i + 1 ∈ [1..n + 1] ∧ count + 1 = Σ_{j=1}^{i−1} χ(x[j] | k) + χ(x[i] | k)). By
             Theorem 2 it is sufficient to prove that
             (Inv ∧ i ≤ n ∧ n − i = c0 ∧ x[i] | k) =⇒ Q00 count←count+1 .




                 • Pre
                   Pre is included in Inv.
                 • i + 1 ∈ [1..n + 1]
                   Due to the invariant, i ∈ [1..n + 1] holds. This, combined with the loop condition i ≤ n, gives
                   1 ≤ i ≤ n and therefore i + 1 ∈ [1..n + 1].
                 • count + 1 = Σ_{j=1}^{i−1} χ(x[j] | k) + χ(x[i] | k)
                   By the loop invariant Inv, count = Σ_{j=1}^{i−1} χ(x[j] | k). Since in this case x[i] is a divisor of k,
                   χ(x[i] | k) = 1, so we may add 1 to both sides of the previous equation.
               D. {{Inv ∧ i ≤ n ∧ n − i = c0 ∧ x[i] ∤ k}} skip {{Q00 }}
                  Let us recall that Q00 is (Inv i←i+1 ∧ n − i = c0 ), that is,
                  (Pre ∧ i + 1 ∈ [1..n + 1] ∧ count = Σ_{j=1}^{i−1} χ(x[j] | k) + χ(x[i] | k) ∧ n − i = c0 ). By Theorem 1
                  it is sufficient to prove that
                  (Inv ∧ i ≤ n ∧ n − i = c0 ∧ x[i] ∤ k) =⇒ Q00 .
                 • Pre
                   Pre is included in Inv.
                 • i + 1 ∈ [1..n + 1]
                   We prove this in the same way as we did in the previous case.
                 • count = Σ_{j=1}^{i−1} χ(x[j] | k) + χ(x[i] | k)
                   By the loop invariant Inv, count = Σ_{j=1}^{i−1} χ(x[j] | k). Since in this case x[i] is not a divisor of k,
                   χ(x[i] | k) evaluates to zero. The desired condition therefore holds, because the right-hand side equals
                   the sum Σ_{j=1}^{i−1} χ(x[j] | k) that appears in Inv.
                 • n − i = c0
                   This is obviously true, since it also appears on the left-hand side of the implication.
           ii. {{Q00 }} i := i + 1 {{Inv ∧ n − i < c0 }}
               Q00 = (Inv i←i+1 ∧ n − i = c0 ). It is obvious that (Inv i←i+1 ∧ n − i = c0 ) =⇒ (Inv ∧ n − i < c0 )i←i+1 ,
               therefore by Theorem 2 we get the expected correctness formula.

To prove the correctness formula {{Q}} S {{R}}, all the verification conditions generated by the verification
rules have to be satisfied. Let us remember that the following condition was not proven:
Inv ∧ i ≤ n ∧ n − i = c0 ⊆ Dx[i]|k ∩ Dx[i]∤k
We could not guarantee that both of the logical functions x[i] | k and x[i] ∤ k are well-defined.
Evaluating these functions during the execution of the program might lead to abortion. Strictly speaking, it was
therefore unnecessary to prove the rest of the conditions; their proofs were given for the sake of completeness.

5   Summary
The main idea behind this work is to take into account abortion caused by partial functions in programs and to
extend the verification rules so that they can ensure that such programs are totally correct. To reason about correctness,
we provided a formal definition of the semantics of the programs under our investigation. One of the contributions
of this paper is that the semantics of the program constructs are defined in such a way that they suit an existing
relational model of programming. This relational model defines a program as the set of its possible executions
and also provides definitions for other important programming notions like problem and solution. Then, for each
class of statements we provided a verification rule. The first six rules are well known. Three rules are new; they
are presented along with their proofs. The use of the rules is demonstrated by means of a verification case study.

References
[Apt97] K. R. Apt, E.-R. Olderog. Verification of Sequential and Concurrent Programs. Springer-Verlag, 1997.




[Dij76]   E. W. Dijkstra. A Discipline of Programming. Prentice-Hall, Englewood Cliffs, New Jersey, 1976.
[Fot88]   Á. Fóthi. Mathematical Approach to Programming. Ann. Univ. Sci. Budapest. Sect. Comput. 9, 105-114.
          1988.
[Fot95]   Á. Fóthi et al. Some concepts of a Relational Model of Programming. Varga L., ed., Proc. 4th Sympo-
          sium on Programming Language and Software Tools, 434-446, Visegrád, Hungary, June 8-14, 1995.
[Fot05]   Á. Fóthi. Bevezetés a programozáshoz. ELTE Eötvös Kiadó. 2005. (in Hungarian).

[Gre12] T. Gregorics. Concept of abstract program. Acta Universitatis Sapientiae, Informatica, 4, 1, 7-16. 2012.
[Gri81]   D. Gries. The Science of Programming. Springer, Berlin, 1981.
[Hoa69] C. A. R. Hoare. An axiomatic basis for computer programming. Comm. ACM 12, pp. 576-580, 1969.
[Man74] Z. Manna. Mathematical theory of computation. McGraw Hill, 1974.

[Owi76] S. Owicki, D. Gries. An axiomatic proof technique for parallel programs. Acta Inf., 6, pp. 319-340, 1976.
[Roe01] W.-P. de Roever et al. Concurrency Verification. Cambridge University Press, 2001.



