A Call-by-Need Lambda Calculus with Scoped Work Decorations

David Sabel and Manfred Schmidt-Schauß
Computer Science Institute, Goethe-University Frankfurt am Main
sabel@ki.informatik.uni-frankfurt.de, schauss@ki.informatik.uni-frankfurt.de

Abstract: The polymorphically typed functional core language LRP is a lambda calculus with recursive let-expressions, data constructors, case-expressions, and a seq-operator that uses call-by-need evaluation. In this work LRP is extended by scoped work decorations to ease computations when reasoning about program improvements. The considered language LRPw extends LRP by two constructs to represent work (i.e. numbers of reduction steps) that can be shared between several subexpressions. Due to the surprising observation that this extension is proper, some effort is required to re-establish the correctness and optimization properties of a set of program transformations also in LRPw. Based on these results, correctness of several useful computation rules for work decorations is shown.

1 Introduction

Motivation. This paper is motivated by our recent investigations on program transformations and on the question whether these transformations are optimizations w.r.t. the runtime behavior. We analyzed such optimizations in core languages of lazy functional programming languages: The calculus LR [SSSS08] is an untyped call-by-need lambda calculus extended by data constructors, case-expressions, seq-expressions, and letrec-expressions. This calculus models, e.g., the (untyped) core language of Haskell. The calculus LRP is the polymorphically typed variant of LR [SSS15a], where typing in LRP is by let-polymorphism [Pi02, Pi00, VPJ13]. Polymorphism is made explicit in the syntax and there are also reduction rules for computing the specific types of functions. In [SSS15b, SSS15a] a notion of improvement for program transformations was defined and analyzed for LR and LRP. Several results concerning different length measures were established, and the improvement property was proved for concrete transformation rules, like common subexpression elimination. In [SSS15c] we considered improvements in LRP with a special focus on list-processing functions. To enable those proofs, a notion of work shared between several subexpressions is very helpful, since it supports modular reasoning. While in [SSS15c] these work decorations were added in a sound but somewhat ad-hoc manner, in this paper we provide a formal treatment and add a scoping for decorations.

Correct Program Transformations and Improvements. For reasoning about the correctness of program transformations, we use contextual equivalence as program equivalence: Contextual equivalence identifies two programs if exchanging one program for the other in any surrounding larger program (the context) is not observable. Due to the quantification over all contexts it is sufficient to observe only the termination behavior of the programs, since, e.g., different values like True and False can be distinguished by plugging them into a context C s.t. C[True] terminates while C[False] diverges. A program transformation is correct if it preserves the semantics, i.e. it preserves contextual equivalence. For reasoning about whether program transformations are also optimizations, i.e.
so-called im- provements, we adopt the improvement theory originally invented by Moran and Sands [MS99] for an abstract machine semantics, but slightly modified and adapted it in [SSS15b, SSS15a] for the calculi LR and LRP. Roughly speaking, a program transformation is an improvement, if it is correct (i.e. preserves contextual equivalence) and the number of computation steps is never strictly increased after applying the transformation. Scoped work decorations. In [MS99] a tick-algebra was introduced to prove correctness of improvement laws in a modular way. A tick Xn can be attached to an expression to add a fixed amount of work to the expression (i.e. n execution steps). Several laws for comput- ing with ticks are formulated and proved correct. In this paper we introduce the calculus LRPw which extends LRP in a similar way, where ticks are called decorations, but they are extended to a formalism that can express work which is shared between several subex- pressions, which makes reasoning more comfortable and also more exact. In LRPw there are two new (compared to LRP) constructs: Bindings of the form a:=n and decorations of the form s[a] . Here s[a] means that the work expressed by the binding for a (i.e. n essen- tial steps) has to be done before the expression s can be further evaluated. If decoration a occurs at several subexpressions, the work is shared between the subexpressions (and thus at most performed once). The bindings a:=n occur in usual letrec-expressions and thus also define the scope of the sharing, and a notion of α-equivalence w.r.t. the labels a. This makes a rigorous formal treatment possible. As a very simple example for the usefulness of shared-work decoration, consider the ex- pression s = (let x=(λ w.w) True in (x, x)) and assume that we want to transform s into a value with work decorations (e.g. for further reasoning): The binding x is only evaluated if the first, the second, or both components of the pair are demanded by an outer context. Since we use call-by-need evaluation, in any of the three cases the binding for x is evaluated only once. This can be expressed by work decorations with let a:=1 in (True[a] , True[a] ). Results. A surprising and counter-intuitive observation is that the extension LRPw cannot be encoded in LRP (see Proposition 4.8), and that the non-encodable expressions cannot be excluded, since these arise naturally when computing with the work decorations. This makes it necessary to explicitly consider the calculus LRPw and to reconsider claims and proofs of properties. Thus in this paper we show that known improvement laws for LRP also hold in LRPw, that a context lemma for improvement holds in LRPw and that several computation rules which simplify the reasoning with decorated expressions are invariant w.r.t. the improvement relation. The results of this paper allow to use shared work decora- tions as a reasoning tool, e.g. for proving improvement laws on list-processing expressions and functions (as in [SSS15c], but now with a formal semantic foundation). 71 Types: Types Typ and polymorphic types PTyp are generated by the grammar: τ ∈ Typ ::= A | (τ1 → τ2 ) | K τ1 . . . τar(K) ρ ∈ PTyp ::= τ | λ A.ρ Expressions: Expressions ExprF , patterns patK,i , bindings Bindi , and polymorphic abstractions PExprF are generated by the following grammar: s,t ∈ ExprF ::= u | x :: ρ | (s τ) | (s t) | (letrec Bind1 , . . . , Bindn in t) | (s[a] ) | (seq s t) | (cK,i :: τ s1 . . . sar(cK,i ) ) | (caseK s (patK,1 _ t1 ) . . . 
(patK,|DK | _ t|DK | )) patK,i ::= (cK,i :: τ x1 :: τ1 . . . xar(cK,i ) :: τar(cK,i ) ) Bindi ::= x :: ρ=s | a:=n where a is a label, and n ∈ N0 u ∈ PExprF ::= ΛA1 . . . . .ΛAk .λ x :: τ.s Typing rules: u :: ρ s :: τ1 pati :: τ1 ti :: τ2 s :: τ t :: τ 0 s :: λ A.ρ ΛA.u :: λ A.ρ (caseK s (pat1 _ t1 ) . . . (pat|DK | _ t|DK ] )) :: τ2 (seq s t) :: τ 0 (s τ) :: ρ[τ/A] s :: τ1 → τ2 t :: τ1 s1 :: ρ1 ... sn :: ρn t :: ρ (s t) :: τ2 (letrec a1 :=n1 , . . . , am :=nm , x1 :: ρ1 = s1 , . . . , xn :: ρn = sn in t) :: ρ s1 :: τ1 , . . . , sar(c) :: τar(c) τ = τ1 → . . . → τar(c) → τar(c)+1 s :: τ2 type(c) = λ A1 , . . . , Am .τ 00 ∃τ10 , . . . , τm0 : τ 00 [τ10 /A1 , . . . , τm0 /Am ] = τ (λ x :: τ1 .s) :: τ1 → τ2 (c :: τ s1 . . . sar(c) ) :: τar(c)+1 Fig. 1: Syntax of expressions and types, and typing rules, where A, Ai ∈ TVar denote type variables, x, xi ∈ Var denote term variables, and a, b, ai , bi are labels used for sharing work. Outline. In Sect. 2 we introduce the calculi LRP and LRPw, and transfer the basic defini- tions and context lemmas from LRP to LRPw. In Sect. 3 we show that several reduction rules and program transformations are correct and are improvements. In Sect. 4 we show that LRP and LRPw are isomorphic w.r.t. contextual equivalence, and we investigate the relation between both calculi w.r.t. the improvement relation. In Sect. 5 we show that sev- eral computation rules for work decorations hold. We conclude in Sect. 6. 2 The Polymorphically Typed Lazy Lambda Calculus LRPw We introduce the calculus LRPw. It is an extension of the polymorphically typed, extended call-by-need lambda calculus LRP [SSS15a, SS14] and its untyped variant LR [SSSS08]. 2.1 Syntax and Operational Semantics of LRPw Let K be a fixed set of type constructors, s.t. every K ∈ K has an arity ar(K) ≥ 0 and an associated finite, non-empty set DK of data constructors, s.t. every cK,i ∈ DK has an arity ar(cK,i ) ≥ 0. We assume that K includes type constructors for lists, pairs and Booleans to- gether with the data constructors Nil and Cons, Pair, and the constants True and False. The syntax of expressions and types of LRPw is defined in Fig. 1, where we assume that variables have a fixed type, written as x :: ρ. The calculus LRPw extends the lambda- 72 calculus by recursive let-expressions, data constructors, case-expressions (for every type constructor K), seq-expressions and by type abstractions ΛA.s and type applications (s τ) in order to express polymorphic functions and type instantiation and by shared work- decorations a:=n and [a] . A letrec-binding a:=n means that a work load of n essen- tial reduction steps is associated with label a where the shared position is the top of the letrec-expression, the construct s[a] means that before expression s can be evaluated the work associated with label a has to be evaluated. Polymorphically typed variables are only permitted for usual bindings of let-environments; at other places, the language is monomorphic where the concrete types can be computed through type reductions. For example, the identity can be written as ΛA.λ x :: A.x, and an application to the constant True is written (ΛA.λ x :: A.x) Bool True. The reduction is (ΛA.λ x :: A.x) Bool True → (λ x :: Bool.x) True → (letrec x=True in x). An expres- sion s is well-typed with type τ (polymorphic type ρ, resp.), written as s :: τ (or s :: ρ, resp.), if s can be typed with the typing rules in Fig. 1 with type τ (ρ, resp.). 
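To make the shape of LRPw-expressions more concrete, the following is a small, untyped sketch of the abstract syntax as a Haskell data type. It covers the constructs of Fig. 1 but omits types, type abstractions, and type applications, and all names used here (Expr, Binding, Label, sharedPair) are illustrative choices of this sketch, not notation of the paper.

```haskell
-- A minimal, untyped sketch of the LRPw expression syntax of Fig. 1.
type Var   = String
type Label = String

data Binding
  = VarBind Var Expr        -- x = s, an ordinary letrec binding
  | WorkBind Label Int      -- a := n, a shared work load of n steps
  deriving Show

data Expr
  = V Var                              -- variable
  | Lam Var Expr                       -- \x. s
  | App Expr Expr                      -- (s t)
  | LetRec [Binding] Expr              -- letrec Bind1, ..., Bindn in t
  | Seq Expr Expr                      -- seq s t
  | Con String [Expr]                  -- constructor application c s1 ... sk
  | Case Expr [(String, [Var], Expr)]  -- case s of alternatives
  | Deco Expr Label                    -- s^[a]: work a must be paid before s
  deriving Show

-- The decorated pair from the introduction:
-- letrec a:=1 in (True^[a], True^[a])
sharedPair :: Expr
sharedPair =
  LetRec [WorkBind "a" 1]
         (Con "Pair" [Deco (Con "True" []) "a", Deco (Con "True" []) "a"])
```

Placing the a:=n bindings in the same letrec environment as the ordinary bindings mirrors the calculus: the letrec both defines the scope of the label a and expresses that the associated work is shared, i.e. performed at most once.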
The calculus LR [SSSS08] is the untyped variant of LRPw (without work-decorations), where types and type-reduction are removed. In the following we often ignore the types and omit the types at variables and also sometimes omit the type reductions. We use some abbreviations: We write λ x1 , . . . , xn .s instead of λ x1 . . . . .λ xn .s. Parts of a letrec- environment are abbreviated by Env, and with {xg(i) =s f (i) }m i= j we abbreviate the bind- ings xg( j) =s f ( j) , . . . , xg(m) =sg(m) . Alternatives of case-expressions are abbreviated by alts. Constructor applications (cK,i s1 . . . sar(cK,i ) ) are abbreviated using vector notation, omit- ting the index as c→ −s . We use FV(s) and BV(s) to denote the free and bound variables of an expression s, and FN(s) and BN(s) to denote the free and bound label-names of an expression s. An expression s is closed iff FV(s) = 0/ and FN(s) = 0. / In an environment Env = {xi =ti }ni=1 , we define LV(Env) = {x1 , . . . , xn }. A value is an abstraction λ x.s, a type abstraction ΛA1 . . . . ΛAn .λ x.s, or a constructor appli- cation c→ −s . A context C is an expression with one hole [·] at expression position. The reduction rules of the calculus are in Fig. 3, where we use the following unions: (case) is the union of (case-c), (case-in), (case-e); (seq) is the union of (seq-c), (seq-in), (seq-e); (cp) is the union of (cp-in), (cp-e); (llet) is the union of (llet-in), (llet-e); (lll) is the union of (lapp),(lcase),(lseq),(llet-in),(llet-e); (letwn) is the union of (letwn-in), (letwn-e); (letw0) is the union of (letw0-in), (letw0-e); (letw) is the union of (letwn), (letw0). The operational semantics of LRPw is defined by the normal order reduction strategy which is a call-by-need strategy. The labeling algorithm shown in Fig. 2 is used to detect the position to which a reduction rule is applied according to normal order. It uses the labels top, sub, vis, and nontarg where top means reduction of the top term, sub means reduction of a subterm, vis marks already visited subexpressions, and nontarg marks already visited variables that are not target of a (cp)-reduction. Note that the labeling algorithm does not descend into sub-labeled letrec-expressions. The rules of the labeling algorithm are in Fig. 3. If the labeling algorithm terminates without a fail, then a potential normal order redex is found, which can only be the direct superterm of the sub-marked subexpression. 73 Labeling algorithm: Labeling of s starts with stop . The rules from below are applied exhaustively or until a fail occurs, where a ∨ b means label a or label b. (s t)sub∨top → (ssub t)vis (letrec Env in s) top → (letrec Env in ssub )vis letrec x=s, Env in C[x ] sub → letrec x=ssub , Env in C[xvis ] sub letrec x=s, y=C[x ], Env in t → letrec x=ssub , y=C[xvis ], Env in t, if C6=[·] sub letrec x=s, y=x , Env in t → letrec x=ssub , y=xnontarg , Env in t (seq s t) sub∨top → (seq ssub t)vis (caseK s alts) sub∨top → (caseK ssub alts)vis letrec x=s vis∨nontarg sub , y=C[x ] . . .→ Fail letrec x=C[xsub ], Env in t → Fail Fig. 2: Labeling algorithm The labelings in the expressions in the reduction rules shown in Fig. 3 indicate the exact place and positions of the expressions and subexpressions involved in the reduction step. However, it may also happen that there is no normal order reduction, since either the evaluation is already finished, or a black hole is detected, or the labeling fails. LRPw Definition 2.1. 
For an expression t, a normal order reduction step t −−−→ t 0 is defined by first applying the labeling algorithm to t, and if the labeling algorithm terminates without a fail, then one of the rules in Fig. 3 has to be applied resulting in t 0 , if possible, where the labels sub, vis must match the labels in the expression t. A weak head normal form (WHNF) is a value v, or an expression letrec Env in v, where v is a value, or an expression → − letrec x1 =c t , {xi =xi−1 }m i=2 , Env in xm . An expression s converges, denoted as s↓LRPw , LRPw,∗ iff there exists a normal-order reduction s −−−−→ s0 , where s0 is a WHNF. We write s↑LRPw iff s↓LRPw does not hold. With ⊥ we denote a diverging, closed expression. Note that there are diverging expressions of any type, for example letrec x :: ρ=x in x. Definition 2.2. The calculus LRP is the subcalculus of LRPw which does not have the syntactic constructs a:=n and s[a] , and the operational semantics of LRP does not have the reduction rules (letwn) and (letw0). WHNFs are defined as in LRPw. Convergence ↓LRP is defined accordingly. Lemma 2.3. For every LRPw-expression s which is also an LRP-expression (i.e. s has no decorations and no a:=n-construct): s ↓LRPw ⇐⇒ s ↓LRP . Definition 2.4. A reduction context R is any context, such that its hole will be labeled with sub or top by the labeling algorithm in Fig. 1. A weak reduction context, R− , is a reduction context, where the hole is not within a letrec-expression. Surface contexts S are contexts where the hole is not in an abstraction, top contexts T are surface contexts where the hole is not in an alternative of a case, and weak top contexts are top contexts where the hole does not occur in a letrec. A context C is strict iff C[⊥] ∼c ⊥. 74 (lbeta) C[((λ x.s)sub r)] → C[letrec x=r in s] (Tbeta) ((ΛA.u)sub τ) → u[τ/A] (cp-in) letrec x1 =vsub , {xi =xi−1 }m vis i=2 , Env in C[xm ] m → letrec x1 =v, {xi =xi−1 }i=2 , Env in C[v], where v is a polymorphic abstraction (cp-e) letrec x1 =vsub , {xi =xi−1 }m vis i=2 , Env, y=C[xm ] in r → letrec x1 =v, {xi =xi−1 }m i=2 , Env, y=C[v] in r, where v is a polymorphic abstraction (llet-in) (letrec Env1 in (letrec Env2 in r)sub ) → (letrec Env1 , Env2 in r) (llet-e) letrec Env1 , x=(letrec Env2 in t)sub in r → letrec Env1 , Env2 , x=t in r (lapp) C[((letrec Env in t)sub s)] → C[(letrec Env in (t s))] (lcase) C[caseK (letrec Env in t)sub alts] → C[(letrec Env in caseK t alts)] (lseq) C[(seq (letrec Env in s)sub t)] → C[(letrec Env in (seq s t))] (seq-c) C[(seq vsub t)] → C[t], if v is a value (seq-in) (letrec x1 =(c→ −s )sub , {x =x }m , Env in C[(seq xvis t)]) i i−1 i=2 m → (letrec x1 =(c→ −s ), {x =x }m , Env in C[t]) i i−1 i=2 (seq-e) (letrec x1 =(c→ −s )sub , {x =x }m , Env, y=C[(seq xvis t)] in r) i i−1 i=2 m → (letrec x1 =(c→ −s ), {x =x }m , Env, y=C[t] in r) i i−1 i=2 → − (case-c) C[caseK (c t )sub . . . ((c→ − ar(c) y ) _ t) . . . ] → C[letrec {yi =ti }i=1 in t] if ar(c) ≥ 1 sub (case-c) C[caseK c . . . (c _ t) . . . ] → C[t], if ar(c) = 0 → − → − (case-in) letrec x1 =(c t )sub , {xi =xi−1 }m vis i=2 , Env in C[caseK xm . . . ((c z ) _ t) . . . ] → − → letrec x =(c y ), {y =t } ar(c) , {x =x }m , Env in C[letrec {z =y } ar(c) in t], 1 i i i=1 i i−1 i=2 i i i=1 if ar(c) ≥ 1 and where yi are fresh (case-in) letrec x1 =csub , {xi =xi−1 }m vis i=2 , Env in C[caseK xm . . . (c _ t) . . . ] → letrec x1 =c, {xi =xi−1 }m i=2 , Env in C[t] if ar(c) = 0 → − vis . . . ((c→ −z ) _ r) . . . 
], Env in s (case-e) letrec x1 =(c t )sub , {xi =xi−1 }m i=2 , u=C[case x K m → − → letrec x1 =(c y ), {yi =ti }ni=1 , {xi =xi−1 }m n i=2 , u=C[letrec {zi =yi }i=1 in r], Env in s where n = ar(c) ≥ 1 and yi are fresh (case-e) letrec x1 =csub , {xi =xi−1 }m vis i=2 , u=C[caseK xm . . . (c _ t) . . .], Env in s → letrec x1 =c, {xi =xi−1 }m i=2 , u=C[t], Env in s, if ar(c) = 0 (letwn-in) letrec Env, a:=n, in C[(s[a] )sub ] → letrec Env, a:=n−1 in C[s[a] ], if n > 0 (letwn-e) letrec a:=n, x=C[(s[a] )sub ], Env in r → letrec a:=n−1, x=C[s[a] ], Env in r, if n > 0 (letw0-in) letrec Env, a:=0, in C[(s[a] )sub ] → letrec Env, a:=0 in C[s] (letw0-e) letrec a:=0, x=C[(s[a] )sub ], Env in r → letrec a:=0, x=C[s], Env in r Fig. 3: Reduction rules 2.2 Contextual Equivalence and Improvement in LRP and LRPw The main measure for estimating the time consumption of computation in this paper is a measure counting essential reduction steps in the normal-order reduction of expres- sions. We omit the type reductions in this measure, since these are always terminating and usually can be omitted after compilation. We define the essential reduction length for both calculi, where we allow some freedom in which reduction rules (as a subset of {lbeta, case, seq, letwn}) should be seen as essential. Clearly, we require that letwn- reductions are always counted (since they represent essential work). We also require that 75 (lbeta)-reductions are always counted, since there are expressions which have no (case)- or (seq)-reductions but an unbounded number of (lbeta)-reductions (for a detailed discussion and analysis on the length measures, see [SSS15a]). Definition 2.5. Let A = {lbeta, case, seq, letwn}, Amin = {letwn, lbeta}, Amin ⊆ A ⊆ A, L ∈ {LRP, LRPw}, and let t be a closed L-expression with t↓Lt0 . Then rlnA (t) is the number of a-reductions in the normal order reduction t↓Lt0 where a ∈ A. We define the L,∗ L,∗ measures as ∞, if t↑L . For a reduction t −−→ t 0 , we define rln(t −−→ t 0 ) as the number of (lbeta)-, (case)-, (seq)-, and (in LRPw) (letwn)-reductions in it. We define contextual equivalence and the improvement relation for LRPw and LRP: Definition 2.6. For L ∈ {LRP, LRPw}, let s,t be two L-expressions of the same type ρ and let Amin ⊆ A ⊆ A. We define the contextual preorder ≤c,L , the contextual equivalence ∼c,L , and the A-improvement relation A,L : s ≤c,L t iff for all L-contexts C[· :: ρ]: C[s]↓L =⇒ C[t]↓L s ∼c,L t iff for all L-contexts C[· :: ρ]: C[s]↓L ⇐⇒ C[t]↓L s A,L t iff s ∼c,L t and for all L-contexts C[· :: ρ] s.t. C[s],C[t] are closed: rlnA (C[s]) ≤ rlnA (C[t]) If s A,L t then we say s A-improves t. If s A,L t and t A,L s, then we write s ≈A,L t. A program transformation P is correct if P ⊆ ∼c,L and it is an A-improvement iff P ⊆ A,L . The following context lemma for ∼c holds in LRP and also in LRPw. The proof is stan- dard, so we omit it. Lemma 2.7 (Context Lemma for Equivalence). Let L ∈ {LRP, LRPw} and s,t be L- expressions of the same type. Then s ≤c t iff for all C ∈ {R, S, T }: C[s]↓L =⇒ C[t]↓L . Let η ∈ {≤, =, ≥} be a relation on non-negative integers, X be a class of contexts X (we will instantiate X with: all contexts C; all reduction contexts R; all surface contexts S; or all top-contexts T ), and let Amin ⊆ A ⊆ A. For expressions s,t of type ρ, let s ./A,η,X t iff for all X-contexts X[· : ρ], s.t. X[s], X[t] are closed: rlnA (X[s]) η rlnA (X[t]). In particular, ./A,≤,C = A , ./A,≥,C = A , and ./A,=,C = ≈A . 
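As a toy illustration of the counting in Definition 2.5 (and not part of the calculus itself): given the trace of rule names of a terminating normal-order reduction, rlnA simply counts the steps whose rule lies in the chosen set A. The Haskell names below (Rule, aMin, aMax, rln) and the example traces are assumptions of this sketch.

```haskell
-- Count essential reduction steps in a trace of rule names,
-- with {letwn, lbeta} <= A <= {lbeta, case, seq, letwn}.
import qualified Data.Set as Set

data Rule = LBeta | CaseR | SeqR | LetWN | LetW0 | Cp | Lll | Gc
  deriving (Eq, Ord, Show)

aMin, aMax :: Set.Set Rule
aMin = Set.fromList [LetWN, LBeta]
aMax = Set.fromList [LBeta, CaseR, SeqR, LetWN]

rln :: Set.Set Rule -> [Rule] -> Int
rln a trace = length (filter (`Set.member` a) trace)

-- e.g. rln aMin [Lll, LBeta, LetWN, Cp, CaseR] == 2
--      rln aMax [Lll, LBeta, LetWN, Cp, CaseR] == 3
```

Note that the improvement relation of Definition 2.6 additionally quantifies over all closing contexts; this counting sketch only illustrates which steps of a single reduction are taken into account.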
In the following we formulate statements for the calculus LRPw, if not stated otherwise. The context lemma for improvement shows that it suffices to take reduction contexts into account for proving improvement. Its proof is similar to the ones for context lemmas for contextual equivalence in call-by-need lambda calculi (see [SSS15b, SSSS08, SSS10]). The proof is nearly a complete copy of the proof of the context lemma for improvement in LRP (see [SSS15b]). We omit it, but it can be found in the technical report [SSS15d]. Lemma 2.8 (Context Lemma for Improvement). Let s,t be expressions with s ∼c t, η ∈ {≤, =, ≥}, and let X ∈ {R, S, T }. Then s ./A,η,X t iff s ./A,η,C t. 76 (gc1) letrec {xi =si }ni=1 , Env in t → letrec Env in t, if ∀i : xi 6∈ FV(t, Env) (gc2) letrec x1 =s1 , . . . , xn =sn in t → t, if for all i : xi 6∈ FV(t) (gcW1) letrec Env, a1 :=n1 , . . . , am :=nm in s → letrec Env in s, if labels a1 , . . . , am do not occur in Env or s (gcW2) letrec a1 :=n1 , . . . , am :=nm in s → s, if a1 , . . . , am do not occur in s (cpx-in) letrec x=y, Env in C[x] → letrec x=y, Env in C[y], if y ∈Var, x 6= y (cpx-e) letrec x=y, z=C[x], Env in t → letrec x=y, z=C[y], Env in t, if y ∈Var, x 6= y → − (cpcx-in) letrec x=c t , Env in C[x] → letrec x=c→ −y , {yi =ti }ni=1 , Env in C[c→ − y] → − → − → − (cpcx-e) letrec x=c t , z=C[x], Env in t → letrec x=c y , {yi =ti }ni=1 , z=C[c y ], Env in t → − letrec x=c t , Env in s → letrec x=c→ − ar(c) (abs) x , {yi =ti }i=1 , Env in s → − ar(c) → − (abse) (c t ) → letrec {yi =ti }i=1 in c x (xch) letrec x=t, y=x, Env in r → letrec y=t, x=y, Env in r (lwas) T [letrec Env in t] → letrec Env in T [t], if T is a weak top context of hole depth 1 (ucp1) letrec Env, x=t in S[x] → letrec Env in S[t] (ucp2) letrec Env, x=t, y=S[x] in r → letrec Env, y=S[t] in r (ucp3) letrec x=t in S[x] → S[t] where in the (ucp)-rules, x 6∈ FV(S, Env,t, r) and S is a surface context Fig. 4: Extra transformation rules 3 Properties of Reductions and Transformations In this section we prove properties about the reduction rules and the additional transforma- tion rules shown in Fig. 4 where we use the following unions: (gc) is the union of (gc1), (gc2); (gcW) is the union of (gcW1), (gcW2); (cpx) is the union of (cpx-in), (cpx-e); (cpcx) is the union of (cpcx-in), (cpcx-e); and (ucp) is the union of (ucp1), (ucp2), (ucp3). P A program transformation P is a binary relation on expressions, we write s − → t, if (s,t) ∈ P. For a set of contexts X, we write (X, P) for the closure of P w.r.t. the contexts in X, X,P P X,P i.e. s −−→ t iff there exists C ∈ X with C[s] − → C[t]. A step s −−→ t is internal if it is not a iX,P i,P normal order reduction step. We denote internal steps by −−→ or by −→ (for all contexts), iX,P X,P LRPw i.e. −−→ = −−→ \ −−−→. P As a further technique, we use so-called forking and commuting diagrams. Let −→ be a LRPw P program transformation. A forking diagram describes how overlappings r ←−−− s − → t can P0 ,∗ LRPw,∗ be closed by a sequence of the form r −−→ s0 ←−−−− t. A commuting diagram describes LRPw P LRPw,∗ P0 ,∗ how overlappings r −−−→ s − → t can be closed by a sequence of the form r −−−−→ s0 −−→ 0 t. Here P may be the transformation P or other transformations. The diagrams abstract from the concrete expressions r, s,t and thus the symbol · is used as a place-holder for the universal and existentially quantified expressions. 
A set of forking diagrams is complete LRPw P for transformation P if for every concrete overlapping r ←−−− s −→ t an applicable diagram is in the set, where applicable means that the expressions, reductions, and transformations exist. Accordingly, a set of commuting diagrams is complete if for every concrete sequence 77 LRPw P r −−−→ s − → t an applicable diagram is in the set. Often forking and commuting diagrams have a common representation and thus we will depict the diagrams only once. 3.1 Analyzing the Transformation (letw) Lemma 3.1. A complete set of forking and commuting diagrams for internal (letw)- transformations applied in reduction contexts can be read off the diagrams: · iR,b /· · iR,b /· iR,letw /· LRPw,a  · iR,letw0 /· LRPw,a LRPw,a ·  / · LRPw,a · LRPw,a LRPw,cp LRPw,cp ·  { LRPw,a LRPw,b    /· / · iR,b · · · iR,letw0 iR,letw0 b ∈ {letwn, letw0}, a arbitrary b ∈ {letwn, letw0}, a arbitrary a arbitrary Proof. The first diagram describes the commuting case and it also includes cases where a (letw-in)-transformation is flipped into an (letw-e)-transformation, if the normal order reduction is (LRPw,llet). The second diagram describes the case where the a-labeled ex- pression of the (letw)-transformation is removed by the normal order reduction. The third diagram describes the case where the internal (letw)-transformation becomes a normal- order reduction. The fourth diagram describes the case where an a-labeled expression is inside an abstraction which is copied by (LRPw,cp). If the transformation is a (letwn), then the transformations commute, but if the transformation is (letw0), then the transformation is duplicated, since it has to remove the a-label twice. iR,letw Lemma 3.2. If s −−−−→ t then s is a WHNF iff t is a WHNF. Proposition 3.3. The transformations (letw0) and (letwn) are correct. Proof. We use the context lemma (Lemma 2.7) and thus it suffices to show that when- letw ever s −−→ t, then for all reduction contexts: R[s]↓LRPw ⇐⇒ R[t]↓LRPw . We first show LRPw,k R[s]↓LRPw =⇒ R[t]↓LRPw : Assume that R[s] −−−−→ r where r is a WHNF. We show LRPw,k0 R[t] −−−−−→ r0 where r0 is a WHNF, and k0 ≤ k by induction on k. The base case k = 0 holds LRPw LRPw,k−1 LRPw,letw by Lemma 3.2. For the induction step let R[s] −−−→ r1 −−−−−−→ r. If R[s] −−−−−−→ R[t], LRPw,k−1 then r1 = R[t] and R[t] −−−−−−→ r and the claim holds. If the reduction is internal, then LRPw LRPw,letw apply a forking diagram to r1 ←−−− R[s] −−−−−−→ R[t]. For the first diagram, we have iR,letw LRPw LRPw,k−1 r1 −−−−→ r10 , R[t] −−−→ r10 and r1 −−−−−−→ r. Applying the induction hypothesis to r1 and LRPw,k00 LRPw,k0 r10 yields r10 −−−−−→ r0 where r0 is a WHNF and k00 ≤ k − 1. Thus R[t] −−−−−→ r0 where LRPw LRPw,k−1 r0 is a WHNF and k0 ≤ k. If the second diagram is applied, then R[t] −−−→ r1 −−−−−−→ r LRPw LRPw,k−2 and the claim holds. If the third diagram is applied, then R[t] −−−→ r2 −−−−−−→ r (where 78 LRPw r1 −−−→ r2 ) and the claim holds. In case of diagram (4), we apply the induction hypothe- LRPw,cp LRPw,k00 sis twice for each (iR, letw)-transformation, which shows that R[t] −−−−−→ r10 −−−−−→ r0 where r0 is a WHNF, k00 ≤ k − 1. Thus the claim holds. For proving R[t]↓LRPw =⇒ R[s]↓LRPw , let #cp(r) be the number of (LRPw,cp)-reductions in the normal order reductions from r to a WHNF and #cp(r) = ∞ if r ↑. Assume that LRPw,k R[t] −−−−→ r where r is a WHNF. We show R[s]↓LRPw and #cp(R[s]) ≤ #cp(R[t]) by induction on the measure (#cp(R[t]), k). 
For the base case (0,0) R[t] is a WHNF and thus by Lemma 3.2 also R[s] is a WHNF and the claim holds. For the induction step let (l, k) > LRPw LRPw,k−1 LRPw,letw (0, 0). Then R[t] −−−→ t 0 −−−−−−→ r where r is a WHNF. If R[s] −−−−−−→ R[t], then the claim holds, i.e. R[s]↓LRPw and #cp(R[s]) = #cp(R[t]). If the transformation is internal, then iR,letw LRPw we apply a commuting diagram to R[s] −−−−→ R[t] −−−→ t1 . For the second and the third LRPw,a diagram, the claim obviously holds. For the first diagram, there exists s1 s.t. R[s] −−−−→ s1 , iR,letw s1 −−−−→ s2 and the measure for t1 is (#cp(t1 ), k − 1) which is strictly smaller than (l, k) (since #cp(t1 ) ≤ l). Thus we can apply the induction hypothesis and derive s1 ↓LRPw and #cp(s1 ) ≤ #cp(t1 ). This shows R[s]↓LRPw and #cp(R[s]) ≤ #(R[t]). For the last diagram, we apply the induction hypothesis twice, which is possible since #cp(·) is strictly decreased. Proposition 3.4. For Amin ⊆ A ⊆ A: (letw0) ⊆ ≈A . Proof. We use the context lemma for improvement and since (letw0) is correct, it suffices letw0 to show that if s −−−→ t implies rlnA (R[s]) = rlnA (R[t]) for all reduction contexts R, s.t. R[s], R[t] are closed. Since (letw0) is correct, we know that rlnA (R[s]) = ∞ ⇐⇒ rlnA (R[t]) = ∞. So suppose that rlnA (R[s]) = n. We show rlnA (R[t]) = n by induction LRPw,k on a normal order reduction R[s] −−−−→ s0 where s0 is a WHNF. The base case holds by LRPw LRPw,k−1 LRPw,letw0 Lemma 3.2. For the induction step, let R[s] −−−→ s1 −−−−−−→ s0 . If R[s] −−−−−−−→ R[t], then rlnA (R[s]) = rlnA (R[t]) = rlnA (s1 ) and the claim holds. If the transformation is iR,letw0 internal, then we apply a forking diagram to s1 . For the first diagram we have s1 −−−−→ t1 and we apply the induction hypothesis to s1 and thus have rlnA (s1 ) = rlnA (t1 ). This also shows rlnA (R[s]) = rlnA (R[t]). For the second diagram the claim holds. For the third diagram the claim also holds, since the additional (LRPw,letw0)-reduction in the normal order reduction for R[s] is not counted in the rlnA -measure. For the fourth diagram we iR,letw0 iR,letw0 LRPw,cp have s1 −−−−→ s01 −−−−→ t1 ←−−−−− R[t]. We apply the induction hypothesis twice: For s1 we get rlnA (s1 ) = rlnA (s01 ) and for s01 we get rlnA (s01 ) = rlnA (t1 ) which finally shows rlnA (R[t]) = rlnA (t1 ) = rlnA (s1 ) = rlnA (R[s]). Proposition 3.5. For Amin ⊆ A ⊆ A, (letwn) ⊆ A . Proof. We use the context lemma for improvement. We already know that (letwn) is cor- letwn rect. We show that if s −−−→ t, then for all reduction contexts R s.t. R[s] and R[t] are closed: rlnA (R[s]) = rlnA (R[t]) or rlnA (R[s]) = 1 + rlnA (R[t]). Since (letwn) is correct, we 79 know that rlnA (R[s]) = ∞ ⇐⇒ rlnA (R[t]) = ∞. So suppose that rlnA (R[s]) = n. We show LRPw,k rlnA (R[t]) = n or rlnA (R[t]) = n+1 by induction on a normal order reduction R[s] −−−−→ s0 where s0 is a WHNF. The base case holds by Lemma 3.2. For the induction step, let LRPw LRPw,k−1 LRPw,letwn R[s] −−−→ s1 −−−−−−→ s0 . If R[s] −−−−−−−→ R[t], then rlnA (R[s]) = 1 + rlnA (R[t]) and the claim holds. If the transformation is internal, then we apply a forking diagram to iR,letwn s1 . For the first diagram we have s1 −−−−→ t1 and we apply the induction hypothesis to s1 and thus have rlnA (s1 ) = 1 + rlnA (t1 ) or rlnA (s1 ) = rlnA (t1 ). This also shows rlnA (R[s]) = 1 + rlnA (R[t]) or rlnA (R[s]) = rlnA (R[t]). For the second diagram we have rlnA (R[s]) = rlnA (R[t]). For the third diagram we have rlnA (R[s]) = 1 + rlnA (R[t]). 
The fourth diagram is not applicable, since the given transformation is (letwn). 3.2 Analyzing the Transformation (gcW) We prove correctness and (invariance w.r.t. ≈) for (gcW), the transformation which per- forms garbage collection of a:=n-bindings which have no corresponding [a] -label. Lemma 3.6. Complete sets of forking and commuting diagrams for (S,gcW) are: · S,gcW /· S,gcW / S,gcW 2 /; · · · LRPw,a LRPw,a  / · LRPw,a  | LRPw,a LRPw,lll  S,gcW 2 · · · S,gcW a arbitrary a arbitrary Proof. The first diagram covers the case where the transformation and the reduction com- mute. There are also cases where a (gcW2) becomes a (gcW1)-transformation, for exam- S,gcW 2 ple in the transformation letrec x=(letrec a:=n in s) in r −−−−→ letrec x=s in r LRPw,llet where letrec x=(letrec a:=n in s) in r −−−−−→ letrec x=s, a:=n in r. The second diagram covers the case where the (gcW)-redex is removed by the normal order reduction, e.g. if it is in an unused alternative of case or inside the first argument of seq. The last diagram covers the case where the redex of (LRPw,lll) is removed by (gcW2). The following lemma describes the base case for the (gcW)-transformation: S,gcW Lemma 3.7. Let s −−−→ t. If s is a WHNF, then t is a WHNF; and if t is a WHNF, then LRPw,llet,0∨1 s −−−−−−−−→ s0 where s0 is a WHNF. Proposition 3.8. The transformation (gcW) is correct and for Amin ⊆ A ⊆ A: (gcW) ⊆ ≈A . S,gcW Proof. We first show correctness. Let s −−−→ t. For s↓LRPw =⇒ t↓LRPw , we use induction LRPw,k on k in s −−−−→ s0 where s0 is a WHNF. For k = 0, Lemma 3.7 shows t↓LRPw . For the induc- LRPw,a S,gcW tion step, we apply a forking diagram. For the first diagram, we have s −−−−→ s1 , s1 −−−→ 80 LRPw,a t1 , t −−−−→ t1 . The induction hypothesis for s1 and t1 shows t1 ↓LRPw and thus t↓LRPw . For LRPw,lll gcW 2 the second diagram, t↓LRPw holds. For the third diagram, we have s −−−−−→ s1 −−−→ t. We apply the induction hypothesis to s1 and t which shows t↓LRPw . For t↓LRPw =⇒ s↓LRPw , LRPw,k we use an induction on k in t −−−−→ t 0 where t 0 is a WHNF. For k = 0, Lemma 3.7 shows s↓LRPw . For the induction step, we apply a commuting diagram. For diagrams (1) and (2), the cases are analogous to the previous part. For the third diagram, we apply the diagram as long as possible which terminates, since there are no infinite sequences of (LRPw, , lll)- LRPw,lll,+ reductions, i.e. we derive s0 with either s −−−−−−→ s0 where s0 is a WHNF and thus s↓LRPw , or we apply the first or second diagram to t and s0 , and then the induction hypothesis (in case of diagram 1). In any case we derive s↓LRPw . The two parts and the context lemma for ∼c show that (gcW ) is correct. Now we consider S,gcW improvement. Let s −−−→ t. We show rlnA (s) = rlnA (t). The context lemma for im- provement then implies (gcW ) ⊆ ≈A . Since (gcW ) is correct, we already have rlnA (s) = LRPw,k ∞ ⇐⇒ rlnA (t) = ∞. Now let s↓LRPw s0 (where s −−−−→ s0 ) and rlnA (s) = n. We show rlnA (t) = n by induction on k. If k = 0, then Lemma 3.7 shows rlnA (s) = 0 = rlnA (t). If k > 0, then we apply the forking diagrams. The cases are analogous as for the correct- ness proof, where have to verify, that the first and the second diagram do neither introduce nor remove normal order reductions, and the third diagram may only remove (LRPw, lll)- reductions which are not counted by the rlnA -measure. 3.3 Properties of the Reduction Rules and Additional Transformations Proposition 3.9. All reduction rules are correct. Proof. For the (letwn)-rules this is already proved. 
For the other rules, correctness was shown in the untyped calculus LR in [SSSS08], which can be directly transferred to LRP. However, LRPw has shared-work decorations and the (letwn)-rules as normal order re- duction. To keep the proof compact, we only consider these new cases. The reasoning to show correctness of the reduction rules in LRPw is the same as for LR, since all additional diagrams between an internal transformation step (i, b) and a (LRPw, letw)-reduction are: i,b /· · i,b /· · LRPw,a LRPw,a LRPw,letw0   /  · · · LRPw,letw0 i,b LRPw,b  · a ∈ {letwn, letw0}, b ∈ {lbeta, cp, case, seq, lll} b ∈ {lbeta, cp, case, seq, lll} In the first case the transformations commute, in the second case the internal transforma- tion becomes a normal order reduction after removing the a label. These cases are already covered by the diagram proofs in LR (see [SSSS08]) and thus can easily be added. The following results from [SSSS08, SSS15a] also hold in LRPw, since the overlappings for (letw) and the corresponding transformation are analogous to already covered cases. 81 Theorem 3.10. Let t be a closed LRPw-expression with t↓LRPwt0 and Amin ⊆ A ⊆ A. C, a 1. If t −−−→ t 0 , and a ∈ A, then rlnA (t) ≥ rlnA (t 0 ). C,cp 2. If t −−→ t 0 , then rlnA (t) = rlnA (t 0 ). S,a 3. If t −→ t 0 , and a ∈ A, then rlnA (t) ≥ rlnA (t 0 ) and rlnA (t 0 ) ≥ rlnA (t) − 1 if a ∈ A, and rlnA (t 0 ) = rlnA (t) if a 6∈ A. C, a 4. If t −−−→ t 0 , and a ∈ {lll, gc}, then rlnA (t) = rlnA (t 0 ). C,a 5. If t −−→ t 0 , and a ∈ {cpx, xch, cpcx, abs, lwas}, then rlnA (t) = rlnA (t 0 ). C, ucp 6. If t −−−→ t 0 , then rlnA (t) = rlnA (t 0 ). S,a Corollary 3.11. Let Amin ⊆ A ⊆ A. If s −→ s0 where a is any rule from Figs. 3 and 4, then C,a s0 A s. If s −−→ s0 where a is (lll), (cp), (letw0) or any rule of Fig. 4, then s0 ≈A s. Proof. The claims follow from Theorem 3.10 and the context lemma. For (gcW) this fol- lows from Proposition 3.8. For (letw0) it follows from Proposition 3.4, and for (letwn) it follows from Proposition 3.5. 4 On the Relationship Between LRPw and LRP In this section we analyze the relationship between the calculus LRP and its extension LRPw. We show that the calculi are isomorphic w.r.t. the equivalence classes of contextual equivalence. This result is more or less obvious, since adding work decorations to expres- sion does not change their termination behavior. We then investigate the more interesting question whether such an isomorphism also exists w.r.t. the equivalence classes of the im- provement relation ≈A . We show that if (seq) 6∈ A, then such an isomorphism exists, while it does not exist if A = A. For A = A, we have to leave open the question whether the embedding of LRP into LRPw is conservative w.r.t. ≈A (i.e. s ≈A,LRP t =⇒ s ≈A,LRPw t). We conjecture that this conservativity holds, but we did not find a proof. 4.1 Contextual Equivalence in LRP and LRPw We define a translation from expressions with work-decorations into decoration-free ex- pressions: 82 Definition 4.1. For an LRPw-expression t, the expression rmw(t) is derived from t by re- moving the work-syntax, i.e. rmw(letrec x1 =s1 , . . . , xn =sn , a1 :=n1 , . . . , am :=nm in s) = letrec x1 =rmw(s1 ), . . . , xn =rmw(sn ) in rmw(s), for m ≥ 0 and n ≥ 1; rmw(s[a] ) = rmw(s); rmw(letrec a1 :=n1 , . . . , am :=nm in s) = rmw(s); and for all other language constructs f : rmw( f [s1 , . . . , sn ]) = f [rmw(s1 ), . . . , rmw(sn )]. LRPw LRPw Since t −−−→ t 0 implies rmw(t) = rmw(t 0 ) or rmw(t) −−−→ rmw(t 0 ), we have: Proposition 4.2. 
Let t be an expression in LRPw, then t ↓LRPw ⇐⇒ rmw(t) ↓LRPw . An immediate consequence is the following theorem, where it is necessary to observe that for any LRPw-context C and LRPw-expression s: rmw(C[s]) = rmw(C)[rmw(s)] and rmw(C) is also an LRP-context. Theorem 4.3. The embedding of LRP into LRPw w.r.t. ∼c is conservative and the calculi LRP and LRPw are isomorphic 4.2 Comparing the Improvement Relations of LRP and LRPw We show that the embedding of LRP into LRPw is an isomorphism w.r.t. ≈A if seq 6∈ A. Let (enc) be the transformation enc letrec a:=n, Env in s −−→ letrec xa :=idn+1 , Env[seq xa t/t [a] ] in s[seq xa t/t [a] ] where [seq xa t/t [a] ] means that every subterm t [a] is replaced by seq xa t, and idk abbre- | .{z viates id . . id}, and id = λ x.x. k Lemma 4.4. Complete sets of forking and commuting diagrams for (R, enc) are: · R,enc /· · R,enc /· · R,enc /· LRPw,a  LRPw,a  LRPw,lbeta LRPw,seq · / · · LRPw,letw0 ·  R,enc LRPw,letwn  S,cp S,gc,∗ ·  / · ·  / · S,gc S,gcW,∗ · R,enc R,enc Lemma 4.5. If s −−−→ t then s is a WHNF iff t is a WHNF. Proposition 4.6. Let Amin ⊆ A ⊂ A, s.t. seq 6∈ A. Then (enc) ⊆ ≈A . R,enc Proof. We show correctness using the context lemma. Let s −−−→ t. We prove s↓LRPw =⇒ LRPw,k t↓LRPw by induction on k in s −−−−→ sk where sk is a WHNF. The case k = 0 holds by R,enc LRPw Lemma 4.5. For the induction step, we apply a forking diagram to t ←−−− s −−−→ s1 : 83 LRPw LRPw LRPw,k−1 For the first diagram, we have t −−−→ t1 ←−−− s1 . Since s1 −−−−−−→ sk , the induction LRPw,lbeta hypothesis shows t1 ↓LRPw and thus t↓LRPw . For the second diagram, we have t −−−−−−→ C,cp C,gc enc t 0 −−→−−→ t1 s.t. s1 −−→ t1 . The induction hypothesis shows t1 ↓LRPw and correctness of LRPw,seq (cp) and (gc) shows t 0 ↓LRPw and thus t↓LRPw . For the last diagram, we have t −−−−−→ C,gc,∗ C,gcW,∗ t 0 −−−→←−−−− s1 . Correctness of (gc) and (gcW) implies t 0 ↓LRPw and thus t↓LRPw . LRPw,k We prove t↓LRPw =⇒ s↓LRPw by induction on (rlnA (t), k) where t −−−−→ tk and tk is a WHNF. For the case (0, 0), Lemma 4.5 shows the claim. For the induction step, we R,enc LRPw apply a commuting diagram to s −−−→ t −−−→ t1 . If rlnA (t) = 0, but k > 0, then only LRPw LRPw the first diagram is applicable. For the first diagram, we have s −−−→ s1 s.t. s1 −−−→ t1 . LRPw,k−1 Since t1 −−−−−−→ tk , and rlnA (t1 ) ≤ rlnA (t), the induction hypothesis shows s1 ↓LRPw LRPw,letwn R,enc S,gc S,cp and thus s↓LRPw . For the second diagram, we have s −−−−−−−→ s0 −−−→ s00 ←−−←−− t1 . Then rlnA (t1 ) < rlnA (t) and by Theorem 3.10 rlnA (s00 ) < rlnA (t). Thus the induction hypothesis applied to s00 shows s0 ↓LRPw and thus s↓LRPw . For the third diagram, we have LRPw,letw0 S,gcW,∗ S,gc,∗ s −−−−−−−→ s0 −−−−→←−−− t1 . Correctness of (letw0), (gcW) and (gc) shows s↓LRPw . R,enc For proving (enc) ⊆ ≈A , we use Lemma 2.8 and show that if s −−−→ t, then rlnA (s) = LRPw,k rlnA (t). Clearly, rlnA (s) = ∞ ⇐⇒ rlnA (t) = ∞. So let s −−−−→ sk where sk is a WHNF. By induction on k, we show rlnA (s) = rlnA (t). If k = 0, then Lemma 4.5 implies that t is a LRPw WHNF and rlnA (s) = 0 = rlnA (t). If k > 0, then we apply a forking diagram to s1 ←−−− R,enc LRPw LRPw LRPw,k−1 s −−−→ t. For the first diagram, we have t −−−→ t1 s.t. s1 −−−→ t1 . Since s1 −−−−−−→ sk , the induction hypothesis shows rlnA (s1 ) = rlnA (t1 ) and thus rlnA (s) = rlnA (t). For LRPw,lbeta C,cp C,gc enc the second diagram, we have t −−−−−−→ t 0 −−→−−→ t1 s.t. s1 −−→ t1 . Clearly, rlnA (s) = 1 + rlnA (s1 ) and rlnA (t) = 1 + rlnA (t 0 ). 
The induction hypothesis shows rlnA(s1) = rlnA(t1) and Theorem 3.10 shows rlnA(t1) = rlnA(t′). For the last diagram, we have t −LRPw,seq→ t′ −C,gc,∗→ · ←C,gcW,∗− s1. Then rlnA(s) = rlnA(s1) and (since seq ∉ A) rlnA(t) = rlnA(t′). Finally, Theorem 3.10 and Proposition 3.8 show rlnA(t′) = rlnA(s1).

Theorem 4.7. Let Amin ⊆ A ⊂ {lbeta, case, seq, letwn}, s.t. seq ∉ A. Then every decorated expression s can be represented as an LRP-expression s′ with s ≈A s′. This means the embedding of LRP into LRPw is an isomorphism w.r.t. ≈A.

Proof. It suffices to show that s ≈LRP,A t implies s ≈LRPw,A t. Let s ≈LRP,A t and let C be an LRPw-context. We show rlnA(C[s]) = rlnA(C[t]) in LRPw; since C is arbitrary, this implies s ≈LRPw,A t: We apply (enc)-transformations to C[s] and C[t] (without changing s, t, since they contain neither a:=n bindings nor ·[a] labels) until we obtain expressions C′[s], C′[t] s.t. both are free of bindings a:=n and labels ·[a]. We have C′[s] ≈A C[s] and C′[t] ≈A C[t] by Proposition 4.6 and thus rlnA(C′[s]) = rlnA(C[s]) and rlnA(C′[t]) = rlnA(C[t]). Since C′ is an LRP-context, the precondition s ≈LRP,A t shows rlnA(C[s]) = rlnA(C[t]), which shows the claim.

We show that the isomorphism property w.r.t. ≈A does not hold for the full set A = {lbeta, case, seq, letwn}:

Proposition 4.8. Let A = {lbeta, case, seq, letwn} and let c1 and c2 be different constants. The expression letrec a:=1 in (Pair c1[a] c2[a]) is not equivalent w.r.t. ≈A to any LRP-expression.

Proof. Assume there is such an expression s. Then s ∼c (Pair c1 c2) and rlnA(s) = 0, so we can assume that s is a WHNF. Using correctness w.r.t. ≈A of program transformations and c1 ≁c c2, we can assume that s is of the form letrec x=s1, y=s2, Env in (Pair x y). We see that s1 as well as s2 alone have rlnA-count 1 in the environment. Using invariance of (lll), (cpx) and (gc) w.r.t. ≈A, we can assume that s1, s2 are applications, seq- or case-expressions, and that evaluating each of them requires at least one rlnA-reduction, independent of the other, to reach a WHNF. Hence, the context let z=[·] in seq (fst z) (snd z) applied to letrec a:=1 in (Pair c1[a] c2[a]) requires 6 = 5 + 1 rlnA-steps: 2 for fst, 2 for snd, 1 for seq, and 1 for evaluating a:=1, whereas s requires at least 5 + 2 steps: the 2 reductions are the minimum to reach a WHNF for the first as well as for the second component.

Even though the calculi are not isomorphic w.r.t. ≈A for the full set A, it may be the case that the embedding of LRP into LRPw is conservative w.r.t. ≈A (i.e. for all LRP-expressions s, t: s ≈A,LRP t =⇒ s ≈A,LRPw t). We conjecture that this conservativity holds, but we were unable to prove it. A simple approach that encodes LRPw-expressions and -contexts into LRP-expressions and -contexts fails due to the example given in Proposition 4.8. So conservativity remains an open problem. Conservativity would allow lifting results on improvements from LRP to LRPw more easily. However, the following lemma shows that ≈A-relations proved in LRPw can be used for reasoning on ≈A in LRP. It holds since every LRP-context is also an LRPw-context and on decoration-free expressions the rln-length is the same in both calculi.

Lemma 4.9. Let Amin ⊆ A ⊆ {lbeta, case, seq, letwn}. Let s, t be LRP-expressions s.t. s A-improves t in LRPw. Then s also A-improves t in LRP.

5 Computation Rules for Work Decorations

In this section we develop computation rules for work decorations which allow shifting up or merging such decorations. Such computation rules are a helpful tool when reasoning about improvements.
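To give a feel for how such rules are applied, here is a small executable sketch using the unshared decoration notation s^[n] introduced in Definition 5.1 below. It rewrites a toy term representation by merging nested decorations ((s^[n])^[m] to s^[n+m], cf. Theorem 5.4 (5)) and dropping empty decorations s^[0] (justified by the invariance of (letw0) and (gcW), Propositions 3.4 and 3.8). The data type WExpr and the function simplify are illustrative names of this sketch, not constructs of the paper.

```haskell
-- Merge nested unshared work decorations and drop empty ones, bottom-up.
data WExpr
  = WVar String
  | WApp WExpr WExpr
  | WDeco Int WExpr        -- s^[n]: n units of work in front of s
  deriving (Eq, Show)

simplify :: WExpr -> WExpr
simplify (WDeco n s) =
  case simplify s of
    WDeco m s'      -> WDeco (n + m) s'   -- (s^[m])^[n] becomes s^[n+m]
    s' | n == 0     -> s'                 -- drop s^[0]
       | otherwise  -> WDeco n s'
simplify (WApp s t) = WApp (simplify s) (simplify t)
simplify e          = e

-- simplify (WDeco 2 (WDeco 3 (WVar "x")))  ==  WDeco 5 (WVar "x")
-- simplify (WDeco 0 (WVar "x"))            ==  WVar "x"
```

The sketch only covers unshared decorations; the shared case with scoped a:=n bindings is exactly what the rules of Theorem 5.4 handle in the calculus itself.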
As a first step we introduce a notation for (unshared) work decorations: Definition 5.1. If n ∈ N, then we write s[n] where [n] is called a (unshared) work decoration. The semantics of s[n] is letrec a:=n in s[a] where a is a fresh label. We show that these work-decorations are redundant: Proposition 5.2. Work decorations s[n] can be encoded as letrec x=(id n ) in (x s). Proof. Let Amin ⊆ A ⊆ A. We show s[n] = letrec a:=n in s[a] ≈A letrec x=(id n ) in (x s): By the context lemmas for ∼c and ≈A , it suffices to show r1 := R[letrec a:=n in s[a] ] ∼c R[letrec x=(id n ) in (x s)] := r2 and rlnA (r1 ) = rlnA (r2 ) for all reduction contexts 85 R, A case analysis of the structure of context R, shows that there always is a term r3 LRPw,lll,∗ LRPw,letwn,n LRPw,letw0 gcW LRPw,lll,∗ LRPw,lbeta LRPw,llet s.t. r1 −−−−−−→−−−−−−−−→−−−−−−−→−−→ r3 and r2 −−−−−−→ (−−−−−−→−−−−−→)n−1 LRPw,cp LRPw,lbeta LRPw,lll,∗ ucp gc −−−−−→−−−−−−→−−−−−−→−−→− → r3 . which shows r1 ∼c r3 ∼c r2 and rlnA (r1 ) = n + rlnA (r3 ) = rlnA (r2 ), since (ucp), (gc), and (gcW) are invariant w.r.t. ≈. A corollary from the theorem on reduction lengths (Theorem 3.10) is: S − s0 by any reduction Corollary 5.3. Let Amin ⊆ A ⊆ A and let S be a surface context. If s → 0 0[1] or transformation rule from Figs. 3 and 4, then s A s and s A s . The following theorem summarizes several computation rules for work decorations. It will be proved in the remainder of this section in a series of lemmas. Theorem 5.4. Let Amin ⊆ A ⊆ A. LRPw,a 1. If s −−−−→ t with a ∈ A, then s ≈A t [1] , and if a 6∈ A, then s ≈A t. 2. R[letrec a:=n in s[a] ] ≈A letrec a:=n in R[s][a] and thus R[s[n] ] ≈A R[s][n] . 3. rlnA (letrec a:=n in s[a] ) = n + rlnA (s0 ) where s0 is s where all [a] -labels are re- moved. In particular this also shows rlnA (s[n] ) = n + rlnA (s). 4. For every reduction context R: rlnA (R[letrec a:=n in s[a] ]) = n + rlnA (R[s0 ]) where s0 is s where all [a] -labels are removed. In particular, rlnA (R[s[n] ]) = n + rlnA (R[s]). 5. (s[n] )[m] ≈A s[n+m] 6. S[letrec a:=n in T [s[a] ]] A letrec a:=n in S[T [s]][a] holds for all surface contexts S, T , and if S[T ] is strict, also S[letrec a:=n in T [s[a] ]] ≈A letrec a:=n inS[T [s]][a] . 7. letrec a:=n, b:=m in (s[a] )[b] ≈A letrec a:=n, b:=m in (s[b] )[a] 8. letrec a:=n in (s[a] )[a] ≈A letrec a:=n in (s[a] ) 9. Let S[·, . . . , ·] be a multi-context where all holes in S are in surface position. Then [a] [a] letrec a:=n in S[s1 , . . . , sn ] A letrec a:=n in S[s1 , . . . , sn ][a] . If some hole in S is [a] [a] in strict position, then letrec a:=n in S[s1 , . . . , sn ] ≈A letrec a:=n in S[s1 , . . . , sn ][a] . Proof. Item (1) is proved in Theorem 5.11. Item (2) is proved in Proposition 5.6. Item (3) is proved in Lemma 5.7. Item (4) is proved in Corollary 5.8. Item (5) is proved in Propo- sition 5.9. Item (6) is proved in Corollary 5.15. Items(7) and (8) are proved in Proposi- tion 5.16. Item (9) is proved in Proposition 5.17. LRPw,letwn Lemma 5.5. Let Amin ⊆ A ⊆ A. If s −−−−−−−→ t, then s ≈A t [1] 86 Proof. We use the context lemma for improvement and thus have to show for all reduction contexts R: rlnA (R[s]) = rlnA (R[letrec a:=1 in t [a] ]). A case analysis on the reduc- LRPw,lll,∗ LRPw,letwn tion context R shows that there exists an expression r s.t. R[s] −−−−−−→−−−−−−−→ r and LRPw,lll,∗ LRPw,letwn LRPw,letw0 gcW LRPw,lll,∗ R[letrec a:=1 in t [a] ] −−−−−−→−−−−−−−→−−−−−−−→−−→−−−−−−→ r. Since all rules except of (LRPw, letwn) leave the rlnA -measure unchanged, the claim is proved. 
Proposition 5.6. R[letrec a:=n in s[a] ] ≈A letrec a:=n in R[s][a] for Amin ⊆ A ⊆ A. Proof. The equation R[letrec a:=n in s[a] ] ≈A letrec a:=n in R[s][a] holds by in- duction on n: The case n = 0 holds, since (letw0) ⊆≈A . For the induction step, a case distinction of the context R, Lemma 5.5, the induction hypothesis, and (lll) ⊆ ≈A show R[letrec a:=n in s[a] ] ≈A letrec a:=n in R[s][a] for n > 0. Lemma 5.7. Let Amin ⊆ A ⊆ A. Then rlnA (letrec a:=n in s[a] ) = n + rlnA (s0 ) where s0 is s where all [a] -labels are removed. In particular this shows rlnA (s[n] ) = n + rlnA (s). LRPw,letwn,n C,letw0,∗ Proof. The reduction letrec a:=n in s[a] −−−−−−−−→−−−−−→ letrec a:=0 in s0 shows rlnA (s[n] ) = n + rlnA (letrec a:=0 in s). Finally, (gcW ) ⊆ ≈A shows the claim. Corollary 5.8. For Amin ⊆ A ⊆ A, the equation rlnA (R[letrec a:=n in s[a] ]) = n + rlnA (R[s0 ]) holds for all reduction contexts R, where s0 is s where all [a] -labels are re- moved. In particular, this shows rlnA (R[s[n] ]) = n + rlnA (R[s]). Proposition 5.9. Let Amin ⊆ A ⊆ A. Then (s[n] )[m] ≈A s[n+m] . Proof. Clearly, (s[n] )[m] ∼c s[n+m] . Since Corollary 5.8 implies rlnA (R[(s[n] )[m] ]) = m + n + rlnA (R[s]) = rlnA (R[s[n+m] ]) for all reduction contexts R, the context lemma for im- provement shows the claim. LRPw,a Lemma 5.10. If s −−−−→ t and a ∈ {lbeta, case-c, seq-c}, then s ≈A t [1] if a ∈ A. Proof. By the context lemma for improvement it suffices to show for all reduction contexts R: rlnA (R[s]) = rlnA (R[t [1] ]), if a ∈ A. By Corollary 5.8 we have rlnA (R[t [1] ]) = 1 + a rlnA (R[t]) and thus it suffices to show rlnA (R[s]) = 1 + rlnA (R[t]). Let s0 → − t0 for a ∈ {lbeta, case-c, seq-c} and assume a ∈ A. We verify all cases: LRPw,a If s = R− − − 0 [s0 ] and t = R0 [t0 ] for a weak reduction context R0 , then R[s] −−−−→ R[t]. If s = letrec Env in R− − − 0 [s0 ] and t = letrec Env in R0 [t0 ] where R0 is a weak reduction context, then we consider the cases for the context R: If R is a weak reduction context, then LRPw,lll,∗ LRPw,a LRPw,lll,∗ R[s] −−−−−−→ letrec Env in R[R− 0 [s0 ]] −−−−→ letrec Env in R[R− 0 [t0 ]] ←−−−−−− R[t]. If R = letrec Env0 in R0 or R = letrec Env0 , u=R0 in r, for a weak reduction context LRPw,lll,∗ LRPw,a C,lll,∗ R0 , then rln(R[s]) = rln(R[letrec a:=1 in t [a] ]), since R[s] −−−−−−→−−−−→←−−− R[t]. 87 Finally, if s = letrec Env, y=R− − 0 [s0 ] in u0 and t = letrec Env, y=R0 [t0 ] in u0 where − R0 is a weak reduction context, then a case analysis on the structure of R again shows that LRPw,lll,∗ LRPw,a C,lll,∗ R[s] −−−−−−→−−−−→←−−− R[t] which shows the claim. LRPw,a Theorem 5.11. Let Amin ⊆ A ⊆ A. If s −−−−→ t with a ∈ A, then s ≈ t [1] . Proof. For (letwn) the claim is Lemma 5.5, for (case-c), (seq-c), and (lbeta) the claim is Lemma 5.10. Other (case)- and (seq)-reductions can be expressed by a (case-c)- or (seq- c)-reduction and (cpcx)-, (gc)-, and (lll)-transformations which are invariant w.r.t. ≈A . Proposition 5.12. For A with Amin ⊆ A ⊆ A. and any strict surface context S the equation S[letrec a:=n in s[a] ] ≈A letrec a:=n in S[s][a] holds. In particular, S[s[n] ] ≈A S[s][n] . Proof. If S[r] ∼c ⊥ for all r, then S[letrec a:=n in s[a] ] ∼c ⊥ ∼c letrec a:=n in S[s][a] . Since for any reduction context R, the expressions R[⊥], R[S[letrec a:=n in s[a] ]], and R[letrec a:=n in S[s][a] ] diverge, the equation rlnA (R[letrec a:=n in s[a] ]) = ∞ = rlnA (R[letrec a:=n in S[s][a] ]) holds. Now the context lemma for improvement shows S[letrec a:=n in s[a] ] ≈A letrec a:=n in S[s][a] . 
LRPw,k Otherwise, verify that for all r reduction contexts R a reduction R[S[r]] −−−−→ R0 [r] ex- LRPw,k ists, where R0 is a reduction context and rlnA (R[S[r]] −−−−→ R0 [r]) = m for some m ≤ k. For R[S[letrec a:=n in s[a] ]], the equation rlnA (R[S[letrec a:=n in s[a] ]]) = m + rlnA (R0 [letrec a:=n in s[a] ]) holds. Applying Proposition 5.6 shows that the equation rlnA (R0 [letrec a:=n in s[a] ]) = rlnA (letrec a:=n in R0 [s][a] ) holds. By Lemma 5.7 we have rlnA (letrec a:=n in R0 [s][a] ) = n + rln(R0 [s0 ]) where s0 is s where all [a] - labels are removed. Thus rlnA (R[S[letrec a:=n in s[a] ]]) = m + n + rlnA (R0 [s0 ]) For R[letrec a:=n in S[s][a] ], we have by Corollary 5.8 rlnA (R[letrec a:=n in S[s][a] ]) = n + rlnA (R[S[s0 ]]) where s0 is s where all [a] -labels are removed. Since rlnA (R[S[s0 ]]) = m + rlnA (R0 [S[s0 ]), we have rlnA (R[letrec a:=n in S[s][a] ]) = n + m + rlnA (R0 [S[s0 ]). Thus we have shown rlnA (R[S[letrec a:=n in s[a] ]]) = rlnA (R[letrec a:=n in S[s][a] ]). Now the context lemma for improvement shows the claim. Proposition 5.13. For all A with Amin ⊆ A ⊆ A and all surface contexts S the inequations S[letrec a:=n in s[a] ] A letrec a:=n in S[s][a] and S[s[n] ] A S[s][n] hold. Proof. Let R be a reduction context. If R[S] is strict, then by applying Proposition 5.12 we derive rlnA (R[S[letrec a:=n in s[a] ]]) ≤ rlnA (R[letrec a:=n in S[s][a] ]). If R[S] is non-strict, then rlnA (R[S[r]]) = mR for any R and where mR only depends on the context R[S]. Then rlnA (R[S[letrec a:=n in s[a] ]]) = mR . From Corollary 5.8 we have rlnA (R[letrec a:=n in S[s][a] ]) = n + rlnA (R[S[s0 ]]) where s0 is s where all [a] -labels are removed. Thus, the equation rlnA (R[letrec a:=n in S[s][a] ]) = n + mR holds. Since S[letrec a:=n in s[a] ] ∼c letrec a:=n in S[s][a] (by correctness of (letw) and (gcW )), the context lemma for improvement shows the claim. 88 Proposition 5.14. Let Amin ⊆ A ⊆ A and S be a surface context. Then the inequation letrec a:=n in S[s[a] ] A letrec a:=n in S[s][a] holds, and if S is strict, then the equa- tion letrec a:=n in S[s[a] ] ≈A letrec a:=n in S[s][a] holds. Proof. First assume that S is strict. Then S0 := R[letrec a:=n in S[·] for all reduction contexts R is also strict. If S0 [r] ∼c ⊥ for all r, then rlnA (R[letrec a:=n in S[s[a] ])) = ∞ and rlnA (R[letrec a:=n in S[s])[a] ) = n + rlnA (R[S[s]]) = n + ∞ = ∞. Now assume that S is not strict. Let R be a reduction context. By (lll)-transformations we have R[letrec a:=n in S[s[a] ]] ≈A letrec a:=n in R[S[s[a] ]. If R[S[·]] is strict, then we have letrec a:=n in R[S[s[a] ]] ≈A letrec a:=n in R[S[s]][a] (since R[S[·]] is a strict surface context) and letrec a:=n in R[S[s]][a] ≈A letrec a:=n in R[S[s][a] ]. By (lll)- transformations we have letrec a:=n in R[S[s][a] ] ≈A R[letrec a:=n in S[s][a] ] and rlnA (R[letrec a:=n in S[s[a] ]]) = rlnA (R[letrec a:=n in S[s][a] ]). If R[S[·]] is non- strict, then rlnA (R[S[r]]) = mR for any r and where mR only depends on R[S]. Then rlnA (R[letrec a:=n in S[s[a] ]]) = rlnA (letrec a:=n in R[S[s[a] ]]]) = mR . We also have rlnA (R[letrec a:=n in S[s][a] ]) = n + rlnA (R[S[s]]) = n + mR by Corollary 5.8. Thus, in any case rlnA (R[letrec a:=n in S[s[a] ]]) ≤ rlnA (R[letrec a:=n in S[s][a] ]) and the context lemma for improvement shows the claim. Corollary 5.15. Let Amin ⊆ A ⊆ A and let S1 , S2 be surface contexts. 
Then the inequation S1 [letrec a:=n in S2 [s[a] ]] A letrec a:=n in S1 [S2 [s]][a] holds and if S1 [S2 ] is strict, also the equation S1 [letrec a:=n in S2 [s[a] ]] ≈A letrec a:=n in S1 [S2 [s]][a] . Proposition 5.16. The equations letrec a:=n in (s[a] )[a] ≈A letrec a:=n in (s[a] ) and letrec a:=n, b:=m in (s[a] )[b] ≈A letrec a:=n, b:=m in (s[b] )[a] hold for all A with Amin ⊆ A ⊆ A. Proof. For the first part, the expressions are clearly contextually equivalent. We have rlnA (R[letrec a:=n in (s[a] )[a] ]) = n + rlnA (R[s0 ]) = rlnA (R[letrec a:=n in (s[a] )]) by Corollary 5.8, for all reduction context R, where s0 is s where all [a] -labels are removed. Now the context lemma for improvement shows the claim. For the second part, let R be a reduction context. Since (llet) ⊆ ≈A , the equation R[letrec a:=n, b:=m in (s[a] )[b] ] ≈A R[letrec b:=m in (letrec a:=n in (s[a] )[b] ]) holds. Applying Corollary 5.8 two times implies rlnA (R[letrec b:=m in (letrec a:=n in (s[a] )[b] ])) = m+n·rlnA (R[s00 ]) where s00 is s where the labels [a] and [b] are removed. Completely analogously it can be shown that also the equation rlnA (R[letrec a:=n, b:=m in (s[b] )[a] ]) = n + m + rlnA (R[s00 ]) holds. Clearly, letrec a:=n, b:=m in (s[a] )[b] ∼c letrec a:=n, b:=m in (s[b] )[a] . Thus the con- text lemma for improvement shows the claim. Repeated application of Corollary 5.15 and Proposition 5.16 shows: Proposition 5.17. Let Amin ⊆ A ⊆ A and S[·, . . . , ·] be a multi-context where all holes are in [a] [a] surface position. Then letrec a:=n in S[s1 , . . . , sn ] A letrec a:=n in S[s1 , . . . , sn ][a] . 89 If some hole ·i with i ∈ {1, . . . , n} is in strict position in S[., . . . , .], then also the equation [a] [a] letrec a:=n in S[s1 , . . . , sn ] ≈A letrec a:=n in S[s1 , . . . , sn ][a] holds. 6 Conclusion We introduced and analyzed the calculus LRPw – an extension of LRP by scoped work decorations. By proving several computation rules for the decoration, we lay the founda- tions for reasoning about program improvements using shared work decorations. Future work is to use the computation rules and to develop proof techniques to show that several program transformations are improvements. References [MS99] Moran, Andrew; Sands, David: Improvement in a Lazy Context: An operational theory for call-by-need. In: Proc. POPL 1999. ACM Press, pp. 43–56, 1999. [Pi00] Pitts, Andrew M.: Parametric Polymorphism and Operational Equivalence. Math. Struc- tures Comput. Sci., 10:321–359, 2000. [Pi02] Pierce, Benjamin C.: Types and Programming Languages. The MIT Press, 2002. [SS14] Schmidt-Schauß, Manfred; Sabel, David: Contextual Equivalences in Call-by-Need and Call-By-Name Polymorphically Typed Calculi (Preliminary Report). In: Proc. WPTE 2014. volume 40 of OASICS. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, pp. 63–74, 2014. [SSS10] Schmidt-Schauß, Manfred; Sabel, David: On generic context lemmas for higher-order calculi with sharing. Theoret. Comput. Sci., 411(11-13):1521 – 1541, 2010. [SSS15a] Schmidt-Schauß, Manfred; Sabel, David: Improvements in a Functional Core Language with Call-By-Need Operational Semantics. Frank re- port 55, Institut für Informatik, Goethe-Universität Frankfurt am Main, 2015. http://www.ki.cs.uni-frankfurt.de/papers/frank/. [SSS15b] Schmidt-Schauß, Manfred; Sabel, David: Improvements in a Functional Core Language with Call-By-Need Operational Semantics. In (Albert, Elvira, ed.): Proc. PPDP ’15. ACM, New York, NY, USA, pp. 220–231, 2015. 
[SSS15c] Schmidt-Schauß, Manfred; Sabel, David: Sharing-Aware Improvements in a Call-by-Need Functional Core Language. Presented at IFL 2015, 2015.
[SSS15d] Schmidt-Schauß, Manfred; Sabel, David: Sharing Decorations for Improvements in a Functional Core Language with Call-By-Need Operational Semantics. Frank report 56, Institut für Informatik, Goethe-Universität Frankfurt am Main, 2015. http://www.ki.cs.uni-frankfurt.de/papers/frank/.
[SSSS08] Schmidt-Schauß, Manfred; Schütz, Marko; Sabel, David: Safety of Nöcker's Strictness Analysis. J. Funct. Programming, 18(04):503–551, 2008.
[VPJ13] Vytiniotis, Dimitrios; Peyton Jones, Simon: Evidence Normalization in System FC (Invited Talk). In (van Raamsdonk, Femke, ed.): Proc. RTA 2013. Volume 21 of LIPIcs, Schloss Dagstuhl–Leibniz-Zentrum für Informatik, Dagstuhl, Germany, pp. 20–38, 2013.