=Paper=
{{Paper
|id=Vol-3284/8939
|storemode=property
|title=Semantics in Skel and Necro
|pdfUrl=https://ceur-ws.org/Vol-3284/8939.pdf
|volume=Vol-3284
|authors=Louis Noizet,Alan Schmitt
|dblpUrl=https://dblp.org/rec/conf/ictcs/NoizetS22
}}
==Semantics in Skel and Necro==
Louis Noizet¹, Alan Schmitt²
¹ Université de Rennes 1, Bretagne, France
² INRIA, France
Abstract
We present Skel, a meta language designed to describe the semantics of programming languages, and
Necro, a set of tools to manipulate said descriptions. We show how Skel, although minimal, can faithfully
and formally capture informal specifications. We also show how we can use these descriptions to generate
OCaml interpreters and Coq formalizations of the specified languages.
1. Introduction
To formally prove properties of a programming language, or programs in that language, it is
necessary to have a formal specification of its semantics. We expect the tool and the language
used to describe this formalization to be executable, usable, and easily verifiable, that is, close
to a paper-written specification.
Necro provides a language (Skel) and a set of tools to formalize and interpret the semantics of
programming languages. Necro fulfills these requirements and more. First, Skel is designed to be
light and its semantics simple. A light language facilitates maintainability and the development
of tools. Second, Skel is powerful enough to express intricate semantic features, as proven by
its use in an ongoing formalization of JavaScript's semantics [1]. Third, a semantics described in
Skel can be close to a previously written formulation, be it as inference rules or as an algorithmic
specification. Finally, Necro provides a comprehensive and extensible set of tools to manipulate
these semantics, for instance to translate them into an interpreter (Necro ML, see Section 3.2) or
to give a formalization in the Coq proof assistant (Necro Coq, see Section 3.3).
Skel is a statically and strongly typed language. First introduced in [2], it is presented here
in a redesigned form, with a syntax close to ML. We also introduce support for polymorphism and
higher-order functions, enabling the use of monads in specifications [1].
Skel can be seen as a specification language or as a way to define inductive rules. Both
approaches are useful, respectively when writing a semantics whose formalization is a set of
algorithms (e.g., ECMA-262 [3]), or when writing a semantics defined with inference rules (e.g.,
the λ-calculus). Skel can be used to describe arbitrary semantics, including ones with partiality and
non-determinism.
Necro contains several tools to manipulate skeletal semantics (semantics written in Skel).
First, Necro Lib offers basic operations (parsing, typing, printing, and simple transformations),
in the form of a library to write programs that manipulate the AST describing a semantics.
Second, Necro includes the tools Necro ML, Necro Coq, and Necro Debug, which use said
library to generate, respectively, an OCaml interpreter, a Coq formalization, and a step-by-step
debugger.
Proceedings of the 23rd Italian Conference on Theoretical Computer Science, Rome, Italy, September 7-9, 2022
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073
The contributions of this work are the Skel language, the accompanying tools, and many
examples of semantics available online.1
The paper is organized as follows. Section 2 introduces Skel. Section 3 presents the Necro
ecosystem. Section 4 compares our work to existing approaches. Section 5 concludes the paper.
2. Skeletal Semantics
2.1. Skel by Example
Let us describe how one specifies a semantics in Skel, using the small-step semantics of the
λ-calculus as an example. A skeletal semantics is a list of declarations, either type declarations
or term declarations, where each declaration can be unspecified or specified. When a type is
unspecified, we only give its name. For instance, when writing the semantics of the λ-calculus, we
might not want to specify how identifiers for variables are internally represented, so we declare
type ident.
A specified type may be a variant type (i.e., an algebraic data type, defined by providing the
list of its constructors together with their input type), a type alias using the := notation, or
a record type (defined by listing its fields and their expected types). For instance, the type of
λ-terms would be defined as follows.
type term = | Var ident | Lam (ident, term) | App (term, term)
In this example, (ident, term) is the product type with two components, the first one being
of type ident, and the second of type term.
A record type is declared as type pair = ( left: int , right: int ) and an alias as
type ident := string. Note that types are all implicitly mutually recursive, so the order in
which they are declared does not matter.
We now turn to term declarations. An unspecified term declaration is simply its name and type.
To declare a specified term, we also give its definition. An example of a term we might want to leave
unspecified is substitution, so we would declare val subst: ident → term → term → term.
An example of a specified term is the following.
val ss (t:term): term =
  match t with
  | App (t1, t2) ->
    branch
      let t1' = ss t1 in
      App (t1', t2)
    or
      let t2' = ss t2 in
      App (t1, t2')
    or (* beta-reduction of a redex *)
      let Lam (x, body) = t1 in
      let Lam _ = t2 in (* t2 is a value *)
      subst x t2 body (* body[x ↦ t2] *)
    end
  | _ -> (branch end: term)
  end

1 https://gitlab.inria.fr/skeletons/necro-test

Term      t  ::= x | C t | (t, …, t) | λp:τ → S | t.f | t.i | (f = t, …, f = t) |
                 t ← (f = t, …, f = t)
Skeleton  S  ::= t0 t1 … tn | let p = S in S | let p : τ in S |
                 match t with | p → S … | p → S end |
                 branch S or … or S end | ret t
Pattern   p  ::= x | _ | C p | (p, …, p) | (f = p, …, f = p)
Type      τ  ::= b | τ → τ | (τ, …, τ)
Term decl dt ::= val x : τ | val x : τ = t
Type decl dτ ::= type b | type b = | C τ … | C τ | type b := τ |
                 type b = (f : τ, …, f : τ)
Figure 1: Syntax of Skel (without Polymorphism)
The branch … or … end construct is a Skel primitive to deal with non-deterministic
choice, similar to McCarthy's ambiguous operator [4]. There is no order in a branching, so any
branch which yields a result can be chosen. Overlapping branches provide non-determinism,
and non-exhaustive branches provide partiality. For instance, the branch end at the end will
never yield a result, so ss is partially defined.
The destructuring pattern matching let Lam (x, body) = t1 in ... asserts that t1 is a
λ-abstraction. If it is not, then the considered branch yields no result. If it is, then x and
body are bound to the corresponding values.
The match … with … end construct is a pattern matching, similar to any other
language's pattern matching. In particular, it is deterministic: only the first (pattern, skeleton)
pair whose pattern matches the value is taken.
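To make the branching behaviour concrete, here is a sketch of our own (not code generated by Necro ML) of the same small-step relation in OCaml, where the list of all possible reducts models the non-determinism and the empty list models the partiality; the type definitions and the naive subst below are assumptions for the sake of the example.

```ocaml
type ident = string

type term =
  | Var of ident
  | Lam of ident * term
  | App of term * term

(* Naive, capture-unsafe substitution, sufficient for closed examples. *)
let rec subst x v t =
  match t with
  | Var y -> if x = y then v else t
  | Lam (y, b) -> if x = y then t else Lam (y, subst x v b)
  | App (t1, t2) -> App (subst x v t1, subst x v t2)

(* All terms reachable from t in one small step: each Skel branch
   contributes its results; no result models partiality, several
   results model non-determinism. *)
let rec ss t =
  match t with
  | App (t1, t2) ->
      List.map (fun t1' -> App (t1', t2)) (ss t1)
      @ List.map (fun t2' -> App (t1, t2')) (ss t2)
      @ (match t1, t2 with
         | Lam (x, body), Lam _ -> [subst x t2 body]  (* beta on a redex *)
         | _ -> [])
  | _ -> []
```

For instance, applying ss to (λx.x) (λy.y) yields the singleton list containing λy.y, while ss applied to a variable yields the empty list.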
2.2. Formalism
Figure 1 contains the syntax of Skel terms and skeletons. It does not include polymorphism,
which will be described in Section 2.5. There are two types of expressions: terms and skeletons.
Our syntax is close to Moggi's computational λ-calculus [5] and to Abstract Normal Forms [6]:
we separate what is intuitively an evaluated value from what is a computation. This makes the
evaluation order unambiguous and the manipulation of skeletal semantics simpler.
A term is either a variable, a constructor applied to a term, a (possibly empty) tuple of terms,
a λ-abstraction, the access to a given field of a term, the access to a member of a tuple, a record
of terms, or a term with reassignment of some fields. A skeleton is either the application of
Γ + x ⊳ τ ⇒ Γ, x : τ        Γ + _ ⊳ τ ⇒ Γ
Γ + C p ⊳ b ⇒ Γ + p ⊳ τ, where ctype(C) = (τ, b)
Γ + (p1, …, pn) ⊳ (τ1, …, τn) ⇒ ((Γ + p1 ⊳ τ1) … ) + pn ⊳ τn
Γ + (f1 = p1, …, fn = pn) ⊳ r ⇒ ((Γ + p1 ⊳ τ1) … ) + pn ⊳ τn,
    where fields(r) = {f1 : τ1, …, fn : τn}

E + x ⊳ v ⇒ E, x : v        E + _ ⊳ v ⇒ E        E + C p ⊳ C v ⇒ E + p ⊳ v
E + (p1, …, pn) ⊳ (v1, …, vn) ⇒ ((E + p1 ⊳ v1) … ) + pn ⊳ vn
E + (f1 = p1, …, fn = pn) ⊳ (f1 = v1, …, fn = vn) ⇒ ((E + p1 ⊳ v1) … ) + pn ⊳ vn
Figure 2: Pattern Matching of Types and Values
a term to other terms, a let-binding, an existential (see Section 2.4), a branching, a match, or
simply the return of a term. We sometimes omit ret in ret t. A pattern is either a variable, a
wildcard, a constructor applied to a pattern, a (possibly empty) tuple of patterns, or a record
pattern. Finally, a type is either a base type (defined by the user), an arrow type, or a (possibly
empty) tuple of types. Term and type declarations have already been described in Section 2.1.
2.3. Typing
Skel is a strongly typed language with explicit type annotations. Every term declaration is given
a type, as well as every pattern in a λ-abstraction. Polymorphism, presented in Section 2.5, also
uses explicitly specified type arguments. Specifying every type might seem tedious at first, but
it helps to improve confidence in the correctness of the skeletal semantics. A future version of
the typer will include an optional type-inference mechanism.
To give the typing rules for Skel, we first define ctype(C), which returns the pair of the
declared input type and output type of the constructor C in its type declaration. For instance,
we have ctype(Var) = (ident, term). Similarly, we define ftype(f) to return (τ, r), where τ
is the type of the field f, and r is the record type to which the field f belongs. For instance,
we have ftype(left) = (int, pair). Note that a field name may not belong to two different
record types, and a constructor name may not be used twice, so these functions are well-defined.
Finally, we write fields(r) for the fields and types of record type r.
The typing rules for terms and skeletons are straightforward; we give them in Appendix A.
They are respectively of the form Γ ⊢t t : τ and Γ ⊢s S : τ, where Γ is a typing environment (a
partial function from variable names to types). To deal with pattern matching in λ-abstractions
and let-bindings, we use the partial function Γ + p ⊳ τ defined at the top of Figure 2.
2.4. Concrete Interpretation
By itself, Skel is only a concrete syntax to describe a programming language. The meaning associated
with a Skel description is called an interpretation. We present in this section a concrete
interpretation, which corresponds to the usual natural semantics of a language [7]. One may also define an
abstract interpretation, where unspecified types are given values in some abstract domain and
where the results from branches are all collected. Such an interpretation is beyond the scope of
this paper.
Every skeleton and term is interpreted as a value. Given a set of values Vb for each unspecified
type b, we build values for variant and product types in the expected way. For arrow types, we
use closures for the specified functions, and we delay the evaluation of unspecified terms until
we have all arguments. To this end, for each declaration val x : τ1 → … → τn → τ, we
assume given an arity n = arity(x) ∈ ℕ and a relation ⟦x⟧ ∈ 𝒫(Vτ1 × ⋯ × Vτn × Vτ).
The rules for the concrete interpretation are straightforward; they are given in Appendix B.
They are of the form E, t ⇓t v for terms and E, S ⇓s v for skeletons, meaning that in the
environment E (a partial function mapping variables to values), the term t or the skeleton S can
evaluate to the value v.
A construct that is specific to Skel is the existential. To interpret the existential let p : τ in S,
we take any value of type τ and match p against this value, before evaluating the continuation in the
extended environment. This pattern matching, also used for the let-in constructs and closures,
is defined in Figure 2 as the partial function E + p ⊳ v that returns an extended environment.
The concrete interpretation is relational: a skeleton can be interpreted to 0, 1, or several
values. A branching with no branch has no result, causing partiality. A branching with several
branches can have several results, causing non-determinism. Pattern-matching can fail, causing
partiality. Finally, non-terminating computations also cause partiality.
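The pattern-matching extension function of Figure 2 can be sketched in OCaml as follows. This is our own illustration, restricted to variables, wildcards, constructors, and tuples; the AST types are assumptions, not Necro's actual representation. Returning an option models the partiality of the function: a None result is exactly a branch yielding no result.

```ocaml
type pattern =
  | PVar of string
  | PWild
  | PConstr of string * pattern
  | PTuple of pattern list

type value =
  | VConstr of string * value
  | VTuple of value list

type env = (string * value) list

(* E + p ⊳ v: Some of the extended environment, or None when the
   pattern does not match the value. *)
let rec extend (e : env) (p : pattern) (v : value) : env option =
  match p, v with
  | PVar x, _ -> Some ((x, v) :: e)
  | PWild, _ -> Some e
  | PConstr (c, p'), VConstr (c', v') when c = c' -> extend e p' v'
  | PTuple ps, VTuple vs when List.length ps = List.length vs ->
      (* extend with each component in turn, failing as soon as one fails *)
      List.fold_left2
        (fun acc p v ->
          match acc with None -> None | Some e -> extend e p v)
        (Some e) ps vs
  | _ -> None
```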
2.5. Polymorphism and Type Inference
Our type system allows for polymorphism, with explicit type annotations specified using angle
brackets. Any declared and defined type can be polymorphic.
(* Unspecified *)
type list<_>
(* Variant *)
type union = | InjL a | InjR b
(* Record *)
type pair = (left: a, right: b)
(* Alias *)
type option := union
Terms defined at toplevel can also be polymorphic, but let-bound terms are necessarily
monomorphic.
val map (f: a → b) (l: list<a>): list<b> =
  branch
    let Nil = l in
    Nil
  or
    let Cons (a, qa) = l in
    let b = f a in
    let qb = map f qa in
    Cons (b, qb)
  end
Figure 3: The Necro Ecosystem. (Diagram: users write Skel semantics for languages such as IMP, JavaScript, and the λ-calculus; Necro Lib (necro.cma) is used by the tools Necro ML, Necro Coq, Necro Trans, and Necro Debug to produce an OCaml interpreter, a Coq formalization, transformed semantics, and a debugger.)
Type annotations are explicitly given when constructing a term (e.g., Nil) or when using
a polymorphic term (e.g., map f qa), but they can be locally inferred in patterns, so they
are not specified there (e.g., let Nil = l in …).
The explicit typing is by design, as explicit type annotations reduce the risk of error. In the
future, we will add an option to perform type inference. This option could also infer the type of
the arguments in constructs of the form λp : τ → S.
3. The Necro Ecosystem
Necro is an ecosystem with several tools to perform different operations on skeletal semantics,
as illustrated in Figure 3.
3.1. Necro Lib
Necro Lib [8] is an OCaml library file, necro.cma. It provides a parser and a typer, in order to
transform a Skel file into a Skel AST. The AST is described in the file main/skeltypes.mli
of the repository. It also contains a pretty-printer, which displays the AST in the format of a
Skel file; a set of transformers, along with the tool Necro Trans that calls the transformers on an
AST and prints the result; and a set of utility functions to manipulate the AST. The necro.cma
file is the basis for Necro ML (Section 3.2) and Necro Coq (Section 3.3).
3.2. Necro ML
Necro ML [9] is a generator of OCaml interpreters. Given a skeletal semantics, it produces an
OCaml functor. This functor expects as arguments an OCaml specification of all unspecified
types and terms; it then provides an interpreter that can compute any given skeleton. Skel
cannot be shallowly embedded in OCaml, since OCaml does not have an operator fitting
the branching construct (pattern matching is deterministic in OCaml). So types and terms are
shallowly embedded, but skeletons are deeply embedded. We use an interpretation monad, which
specifies, amongst other things, how skeletons are represented and how let-ins, applications,
and branches are computed.
3.2.1. OCaml Interpreter
When Necro ML is executed on a skeletal semantics, it generates an OCaml file, which contains
several modules, functors, and module types. To create an interpreter, one needs to instan-
tiate them, in order to indicate how unspecified types and terms are interpreted, and which
interpretation monad is chosen.
Working examples can be found in the test folder of Necro ML's repository.
3.2.2. Interpretation Monad
The interpretation monad is defined as follows, where terms are assumed to be pure while
skeletons are monadic (of type 'a t).
module type MONAD = sig
  type 'a t
  val ret: 'a -> 'a t
  val bind: 'a t -> ('a -> 'b t) -> 'b t
  val branch: (unit -> 'a t) list -> 'a t
  val fail: string -> 'a t
  val apply: ('a -> 'b t) -> 'a -> 'b t
  val extract: 'a t -> 'a
end
The fail operator takes as input a string, the error message to be raised, and the extract
operator is a usability construct to extract a result from the monad, typically to display it.
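As an illustration, here is a minimal instance of this signature (our own sketch, not one of the distributed monads) where a skeleton is interpreted as the list of all its possible results, directly mirroring the relational concrete interpretation of Section 2.4.

```ocaml
(* A list-based instance of MONAD (our own sketch): a skeleton is
   interpreted as the list of all its possible results. *)
module ListMonad = struct
  type 'a t = 'a list
  let ret x = [x]
  let bind x f = List.concat_map f x
  (* every branch contributes all of its results *)
  let branch l = List.concat_map (fun b -> b ()) l
  (* no result at all: partiality *)
  let fail (_msg : string) = []
  let apply f x = f x
  let extract = function
    | v :: _ -> v
    | [] -> failwith "no result"
end
```

With this instance, overlapping branches simply accumulate: branch [ (fun () -> ret 1); (fun () -> ret 2) ] evaluates to [1; 2].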
There are several proposed ways to instantiate this monad, and the user can also define their
own. The standard identity monad, which is closest to a shallow embedding, tries each branch
in turn. But it chooses the first branch that succeeds, with no ability to backtrack, which can be
problematic in some cases. A more interesting one is the continuation monad, which keeps the
branches not taken as a failure continuation, and can thus backtrack if need be [10].
module ContPoly = struct
  type 'b fcont = string -> 'b
  type ('a, 'b) cont = 'a -> 'b fcont -> 'b
  type 'a t = { cont: 'b. (('a, 'b) cont -> 'b fcont -> 'b) }
  let ret (x: 'a) = { cont = fun k fcont -> k x fcont }
  let bind (x: 'a t) (f: 'a -> 'b t) : 'b t =
    { cont = fun k fcont -> x.cont (fun v fcont' -> (f v).cont k fcont') fcont }
  let fail s = { cont = fun k fcont -> fcont s }
  let rec branch l = { cont = fun k fcont ->
    begin match l with
    | [] -> fcont "No branch matches"
    | b :: bs -> (b ()).cont k (fun _ -> (branch bs).cont k fcont)
    end }
  let apply f x = f x
  let extract x = x.cont (fun a _ -> a) (fun s -> failwith s)
end
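To illustrate the backtracking behaviour, here is a small example of our own (the module is repeated so the snippet is self-contained): the branching first yields 1, the continuation then fails, and the failure continuation re-enters the branching with the second alternative.

```ocaml
module ContPoly = struct
  type 'b fcont = string -> 'b
  type ('a, 'b) cont = 'a -> 'b fcont -> 'b
  type 'a t = { cont: 'b. (('a, 'b) cont -> 'b fcont -> 'b) }
  let ret (x: 'a) = { cont = fun k fcont -> k x fcont }
  let bind (x: 'a t) (f: 'a -> 'b t) : 'b t =
    { cont = fun k fcont -> x.cont (fun v fcont' -> (f v).cont k fcont') fcont }
  let fail s = { cont = fun k fcont -> fcont s }
  let rec branch l = { cont = fun k fcont ->
    begin match l with
    | [] -> fcont "No branch matches"
    | b :: bs -> (b ()).cont k (fun _ -> (branch bs).cont k fcont)
    end }
  let extract x = x.cont (fun a _ -> a) (fun s -> failwith s)
end

(* The failure happens after the branching has committed to its first
   alternative; the failure continuation backtracks to the second one. *)
let result =
  ContPoly.(extract
    (bind
       (branch [ (fun () -> ret 1); (fun () -> ret 42) ])
       (fun v -> if v = 1 then fail "dead end" else ret v)))
(* result = 42 *)
```

A first-successful-branch strategy with no backtracking would instead commit to 1 and then be stuck on the failure.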
Nevertheless, the continuation monad is not complete w.r.t. the concrete interpretation, as
it always executes the first branch first, and can therefore be caught in an infinite loop. The
following function is an example thereof.
val loop (_:()): () = branch loop () or () end
To fix this issue, we propose another interpretation monad, called BFS, which performs one step
in each branch until it gets a result. The file containing all these monads and some others is
available online.2 Note that to change the interpretation monad, the user simply has to swap it
at module instantiation; no code needs to be rewritten.
3.3. Necro Coq
Necro Coq [11] is a tool to embed a given skeletal semantics into a Coq formalization. It can
then be used to prove language properties or correctness of a given program.
3.3.1. Structure
Necro Coq uses a deep embedding of Skel. The embedding of Skel is defined in the file
files/Skeleton.v, which is presented in Section 3.3.2. The command necrocoq file.sk
provides a Coq file that contains the AST of the original skeletal semantics. To give meaning to
the skeletons, we then provide a file that defines the concrete interpretation for skeletons and
terms.
3.3.2. Skel's Embedding
The embedding is a straightforward deep embedding. It defines a number of variables, which
stand for the data of a given skeletal semantics (its constructors, base types, …), and provides
the basic constructs (the definition of a type, a skeleton, …).
3.3.3. Values
The values that result from the evaluation of a term or skeleton are deeply embedded as
well. The Coq type of values is defined with five constructors. Here are the first four.
Inductive cvalue : Type :=
| cval_base : forall A, A -> cvalue
| cval_constructor : constr -> cvalue -> cvalue
| cval_tuple: list cvalue -> cvalue
| cval_closure: pattern -> skeleton -> list (string * cvalue) -> cvalue.
2 https://gitlab.inria.fr/skeletons/necro-ml/-/blob/master/necromonads.ml
The cval_base constructor allows values of an unspecified type to be represented in Coq
with any given type A. For instance, the value 1, in an unspecified type int, could be stored as
cval_base Z 1. The next two constructors are straightforward. The fourth one is a closure
constructor, to store π-abstractions. The three arguments of the constructor are the bound
pattern, the skeleton to evaluate, and the current environment. The fifth constructor is used for
unspecified functional terms; it will be presented below.
3.3.4. Interpretation
The file Concrete.v3 provides the concrete interpretation for skeletons. It uses Coq's inductive
definitions to define the relations interp_skel and interp_term, which relate respectively skeletons and
terms in a given environment to their possible interpretations as a value. It mostly follows the
semantics of Appendix B. For instance, this is the rule for a let in construct:
Inductive interp_skel: env -> skeleton -> cvalue -> Prop :=
| i_letin: forall e e' p s1 s2 v w,
interp_skel e s1 v ->
add_asn e p v e' ->
interp_skel e' s2 w ->
interp_skel e (skel_letin p s1 s2) w
The add_asn e p v e' proposition states that the environment e can be extended into e′ by
mapping p to v, that is, e′ = e + p ⊳ v.
The file Concrete.v defines the interpretation using big-step evaluation, but we also provide
a file Concrete_ss.v which uses small-step evaluation. An alternative is ConcreteRec.v,
which defines the interpretation from the bottom up: instead of using Coq's induction, it
only defines how to perform one step, without recursive calls, and this step may then be
iterated. It is closest to the initial definition in [2]. The purpose of this file is to be able to
perform a strong induction on interp_skel in a simple way.
These interpretations are proven (in Coq) to be equivalent, so one can use them interchangeably,
and one may even switch between them depending on what is more convenient at the time.
The big-step definition is usually the easiest to use. The small-step one allows reasoning
about non-terminating behaviors, and it provides a simple way to prove the subject reduction
of Skel (see below). The iterative one allows, as we mentioned, performing a strong induction.
An instance where we need it is to prove an induction property on the semantics of a
λ-calculus.4 As Skel is deeply embedded, one evaluation step in the language corresponds to
several steps in Necro Coq. Because of this, the inductive interpretation is not convenient for
proving that property, whereas it is much simpler using the iterative version with a strong
induction on the height of the derivation tree.
As Skel is strongly typed, we also have a file WellFormed.v to check that a term or skeleton
is well-formed, i.e., that it can be typed. We prove that the concrete interpretation respects the
subject reduction property with regard to well-formedness. Since the interpretations are all shown
to be equivalent, it suffices to prove it for Concrete_ss.v:
3 https://gitlab.inria.fr/skeletons/necro-coq/-/blob/master/files/Concrete.v
4 https://gitlab.inria.fr/skeletons/necro-coq/-/blob/master/test/lambda/EvalInd.v#L55
Theorem subject_reduction_skel:
  forall sk sk' ty,
    type_ext_skel sk ty ->
    interp_skel_ss sk sk' ->
    type_ext_skel sk' ty.
This translates roughly to: if S : τ and S → S′, then S′ : τ, where S and S′ are extended
skeletons.
With this proven, and since Concrete_ss.v and Concrete.v are equivalent, we have: if
⊢ S : τ and S ⇓s v, then v ∈ Vτ.
Finally, interpretations in the form of abstract machines have been defined in [12]. They
reuse the deep embedding provided by Necro Coq and are proven correct in relation to the
concrete interpretation.
3.3.5. Unspecified Functional Values
Since Skel allows declaring unspecified terms, we must be able to interpret them. The natural
idea would be to ask for a cvalue for each unspecified term. But for unspecified functions
(like addition), that would mean giving a closure, which is equivalent to specifying the
function. We would lose Skel's power of partial specification. Instead, we ask for a relation that,
given a list of cvalues as arguments, provides the possible results of applying the term to these
arguments. For instance, for addition, it would be {([x; y], x + y) | x, y ∈ ℕ}.
There are operational and denotational approaches to represent this in Coq. We choose the
operational approach as it does not need to evaluate ahead of time: it simply waits to be applied
to enough arguments before computing. We thus use the following constructor, which denotes an
unspecified functional term that is not yet fully applied.
| cval_unspec: nat -> unspec_value -> list type -> list cvalue -> cvalue.
If the first nat argument is n, then S n arguments are missing. We add one because there
cannot be 0 missing arguments: with 0 arguments missing, the term would not be a partial
application. The list of types is the type annotation for polymorphic unspecified terms; the
list of values is the list of arguments that have already been provided.
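This arity bookkeeping can be sketched in OCaml as follows. This is our own illustration with hypothetical names, not Necro Coq's actual code: arguments are collected one by one, and when the arity is reached the user-provided relation is queried for the possible results.

```ocaml
type value =
  | VInt of int
  (* missing = n means S n arguments are still expected; args are the
     arguments already provided, most recent first. *)
  | VUnspec of { missing : int; args : value list;
                 rel : value list -> value list }

(* Apply a value to one argument, returning all possible results. *)
let apply_unspec (f : value) (arg : value) : value list =
  match f with
  | VUnspec u when u.missing = 0 ->
      (* last missing argument: query the relation *)
      u.rel (List.rev (arg :: u.args))
  | VUnspec u ->
      (* still a partial application: just record the argument *)
      [VUnspec { u with missing = u.missing - 1; args = arg :: u.args }]
  | _ -> failwith "not a function"

(* Unspecified addition: a relation that happens to be functional. *)
let add : value =
  VUnspec { missing = 1; args = [];
            rel = (function [VInt x; VInt y] -> [VInt (x + y)] | _ -> []) }
```

Applying add to two integers in turn first yields a partial application, then queries the relation, producing their sum.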
3.3.6. Applications and Usability
We have considered several applications of Necro Coq; some of them can be found in the test
folder of the repository. For instance, we have proved the correctness of an IMP program that
computes the factorial function (test/certif folder). Necro Coq has also been used to prove
the equivalence between a small-step and a big-step semantics for a λ-calculus extended with
natural numbers (test/lambda folder). In addition, it has been used in [13] to provide the a
posteriori proof of an automatic generic transformation of a big-step semantics into a small-step
semantics.
4. Related Work
We review existing approaches that are generic, in the sense that they can be used to describe
and manipulate any semantics.
Our work is an extension of the work undertaken in [2]. We significantly improve on this
approach by having a more expressive language (with higher-order functions and polymorphism)
and a set of tools to manipulate skeletal semantics. The Coq formalizations of [2] were written by
hand; they can now be automatically generated. Generation of OCaml code was also proposed
in [14], but it was only available for the language of [2], which was less powerful than the
current language. It also did not consider interpretation monads.
Regarding meta-languages to describe semantics, existing tools are much more complex than
Skel. This is the case of Lem [15] and Ott [16]. The simplicity of Skel (the file skeltypes.mli
describing Skelβs AST is only 114 lines of specification) allows anyone to easily write a tool
handling skeletal semantics. This is less immediate with Lem and Ott, as one has to deal with
many additional features. For instance, Lem natively defines set comprehensions, relations, and
maps. Also, its Coq generation is done as a shallow embedding, hence functions must be proven
to terminate. In addition, shallow embeddings of large semantics are not easily manipulated in
Coq, due to the space complexity of the induction and inversion tactics.
The K framework [17] also makes it possible to formally define the semantics of a language and
prove programs, and it is designed to be easy to use. It does not, however, allow proving
meta-theoretical properties of a language,5 which is one of our future goals. Furthermore, there is
no Coq backend for K at the present time, and since K is a large language, writing new backends
is far from trivial.
Finally, another common way to describe a semantics is to implement it both in OCaml and in
Coq, or other similar tools, either directly or through tools to transform them (Coq extraction to
OCaml or coq-of-ocaml [18] to go the other way). One may then execute the semantics using
the OCaml version and prove properties using the Coq one. In this case, the Coq formalization
is simpler to manipulate, but changing design choices (such as going from a shallow to a deep
embedding) is very costly, as OCaml and Coq ASTs are not easy to manipulate.
5. Conclusion
Skel offers a way to specify the semantics of programming languages, using a language light
enough to be easily readable and maintainable, yet powerful enough to express many semantic
features. We have focused on dynamic semantics, but one may also describe static semantics in Skel.
The tools Necro provides, such as Necro ML, help in the process of writing a semantics. Necro
Coq makes it possible to manipulate and certify these semantics once written. Together they give
the necessary framework to prove program correctness and language properties.
Skel has been used to write the semantics of a set of basic languages such as IMP, but it has
also been used to formalize larger languages, such as WASM (unpublished) and JavaScript,
whose formalization is ongoing [1]. We plan to write a formalization of Python based on an
existing precise description [19].
5 https://sympa.inria.fr/sympa/arc/coq-club/2020-02/msg00066.html
The Necro ecosystem includes other tools, such as Necro Debug,6 which provides step-by-step
execution of a semantics. An example execution can be found online.7 In addition, users can
easily produce a new tool using Necro Lib.
Although not mentioned in this paper, Skel allows splitting the definition of a semantics into
several files, which are accessed using an include construct. The OCaml generation tool can
handle these multi-file semantics using modules, but at the moment this is not the case for
Necro Coq. A future task is to implement this functionality using Coq modules. Once this has
been done, the logical next step is to implement a standard library for Skel, defining basic types
like lists, with properties on these types proven using Necro Coq. Initial work on this standard
library can be found online.8
Finally, Skel and Necro are currently being used to describe semantics style transformations,
both at the object level [13] and at the meta-language level [12]. Work has also started to
automatically derive control flow analyses from a language description.
6 https://gitlab.inria.fr/skeletons/necro-debug
7 https://skeletons.inria.fr/debugger/index_while.html
8 https://gitlab.inria.fr/skeletons/necro/-/tree/master/examples/necro_in_necro
References
[1] A. Khayam, L. Noizet, A. Schmitt, Jskel: Towards a formalization of javascriptβs semantics,
in: JLFA 2021 - JournΓ©es Francophones des Langages Applicatifs, 2021.
[2] M. Bodin, P. Gardner, T. Jensen, A. Schmitt, Skeletal Semantics and their Interpretations,
Proceedings of the ACM on Programming Languages 44 (2019) 1β31. URL: https://hal.inria.
fr/hal-01881863. doi:10.1145/3290357.
[3] TC39, ECMA-262, https://262.ecma-international.org/, ???? URL: https:
//262.ecma-international.org/.
[4] J. McCarthy, A basis for a mathematical theory of computation, in: P. Braffort,
D. Hirschberg (Eds.), Computer Programming and Formal Systems, volume 26 of Stud-
ies in Logic and the Foundations of Mathematics, Elsevier, 1959, pp. 33β70. URL: https://
www.sciencedirect.com/science/article/pii/S0049237X09700990. doi:https://doi.org/
10.1016/S0049-237X(09)70099-0.
[5] E. Moggi, Computational lambda-calculus and monads, Proceedings. Fourth Annual
Symposium on Logic in Computer Science (1988).
[6] A. Sabry, M. Felleisen, Reasoning about programs in continuation-passing style, in: LISP
AND SYMBOLIC COMPUTATION, 1993, pp. 288β298.
[7] G. Kahn, Natural semantics, in: F. Brandenburg, G. Vidal-Naquet, M. Wirsing (Eds.),
STACS 87, 4th Annual Symposium on Theoretical Aspects of Computer Science, Passau,
Germany, February 19-21, 1987, Proceedings, volume 247 of Lecture Notes in Computer
Science, Springer, 1987, pp. 22β39. URL: https://doi.org/10.1007/BFb0039592. doi:10.1007/
BFb0039592.
[8] L. Noizet, Necro Library, https://gitlab.inria.fr/skeletons/necro, ???? URL: https://gitlab.
inria.fr/skeletons/necro.
[9] E. C. Martin Bodin, NathanaΓ«lle Courant, L. Noizet, Necro Ocaml Generator,
https://gitlab.inria.fr/skeletons/necro-ml, ???? URL: https://gitlab.inria.fr/skeletons/
necro-ml.
[10] O. Danvy, A. Filinski, Abstracting control, in: Proceedings of the 1990 ACM Conference
on LISP and Functional Programming, LFP β90, Association for Computing Machinery,
New York, NY, USA, 1990, p. 151β160. URL: https://doi.org/10.1145/91556.91622. doi:10.
1145/91556.91622.
[11] L. Noizet, Necro Gallina Generator, https://gitlab.inria.fr/skeletons/necro-coq, ???? URL:
https://gitlab.inria.fr/skeletons/necro-coq.
[12] G. Ambal, S. Lenglet, A. Schmitt, Certified Abstract Machines for Skeletal Semantics, in:
Certified Programs and Proofs, Philadelphia, United States, 2022.
[13] G. Ambal, A. Schmitt, S. Lenglet, Automatic Transformation of a Big-Step Skeletal Seman-
tics into Small-Step, Research Report RR-9363, Inria Rennes - Bretagne Atlantique, 2020.
URL: https://hal.inria.fr/hal-02946930.
[14] N. Courant, E. Crance, A. Schmitt, Necro: Animating Skeletons, in: ML 2019, Berlin,
Germany, 2019.
[15] D. Mulligan, S. Owens, K. Gray, T. Ridge, P. Sewell, Lem: Reusable engineering of real-world
semantics, ACM SIGPLAN Notices 49 (2014). doi:10.1145/2628136.2628143.
[16] P. Sewell, F. Zappa Nardelli, S. Owens, G. Peskine, T. Ridge, S. Sarkar, R. Strniša, Ott:
Effective tool support for the working semanticist, J. Funct. Program. 20 (2010) 71–122.
doi:10.1017/S0956796809990293.
[17] L. Li, E. L. Gunter, IsaK: A Complete Semantics of K, Technical Report, Computer Science,
Univ. of Illinois Urbana-Champaign, 2018.
[18] Nomadic Labs, coq-of-ocaml, URL: https://clarus.github.io/coq-of-ocaml.
[19] R. Monat, Static Type and Value Analysis by Abstract Interpretation of Python Programs
with Native C Libraries, Ph.D. thesis, Sorbonne UniversitΓ©, 2021.
A. Typing Rules for Skeletal Semantics
Each rule is written premises ⟹ conclusion, where ⊢t types terms and ⊢s types skeletons.

(Var)          Γ(x) = τ  ⟹  Γ ⊢t x : τ
(TermUnspec)   val x : τ  ⟹  Γ ⊢t x : τ
(TermSpec)     val x : τ = t  ⟹  Γ ⊢t x : τ
(Const)        Γ ⊢t t : τ,   ctype(C) = (τ, τ′)  ⟹  Γ ⊢t C t : τ′
(Tuple)        Γ ⊢t t1 : τ1, …, Γ ⊢t tn : τn  ⟹  Γ ⊢t (t1, …, tn) : (τ1, …, τn)
(Clos)         Γ + p ← τ ⊢s S : τ′  ⟹  Γ ⊢t (λp : τ → S) : τ → τ′
(FieldGet)     Γ ⊢t t : r,   ftype(f) = (r, τ)  ⟹  Γ ⊢t t.f : τ
(TupleGet)     Γ ⊢t t : (τ1, …, τn),   1 ≤ i ≤ n  ⟹  Γ ⊢t t.i : τi
(Rec)          fields(r) = {f1 : τ1, …, fn : τn},   Γ ⊢t t1 : τ1, …, Γ ⊢t tn : τn  ⟹  Γ ⊢t (f1 = t1, …, fn = tn) : r
(FieldSet)     Γ ⊢t t : r,   ∀i ∈ ⟦1; n⟧, Γ ⊢t ti : τi,   ∀i ∈ ⟦1; n⟧, ftype(fi) = (r, τi)  ⟹  Γ ⊢t t ← (f1 = t1, …, fn = tn) : r
(Ret)          Γ ⊢t t : τ  ⟹  Γ ⊢s ret t : τ
(Branch)       Γ ⊢s S1 : τ, …, Γ ⊢s Sn : τ  ⟹  Γ ⊢s (S1 … Sn) : τ
(LetIn)        Γ ⊢s S : τ,   Γ + p ← τ ⊢s S′ : τ′  ⟹  Γ ⊢s let p = S in S′ : τ′
(Exist)        Γ + x ← τ ⊢s S : τ′  ⟹  Γ ⊢s let x : τ in S : τ′
(Match)        Γ ⊢t t : τ,   Γ + p1 ← τ ⊢s S1 : τ′, …, Γ + pn ← τ ⊢s Sn : τ′  ⟹  Γ ⊢s match t with |p1 → S1 | … | pn → Sn : τ′
(App)          Γ ⊢t t0 : τ1 → ⋯ → τn → τ,   Γ ⊢t t1 : τ1, …, Γ ⊢t tn : τn  ⟹  Γ ⊢s (t0 t1 … tn) : τ
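To make the typing rules above concrete, here is a small executable sketch: an OCaml checker covering just three of the rules (Var, Tuple, TupleGet) over a simplified type and term grammar. All names (ty, term, type_of) are hypothetical illustrations, not part of the Necro code base.

```ocaml
(* Simplified stand-ins for Skel types and terms. *)
type ty =
  | Base of string            (* a named base type *)
  | TupleTy of ty list        (* (tau_1, ..., tau_n) *)

type term =
  | Var of string             (* x *)
  | Tuple of term list        (* (t_1, ..., t_n) *)
  | TupleGet of term * int    (* t.i, 1-indexed as in rule TupleGet *)

(* Gamma is an association list from variables to types;
   List.assoc raises Not_found for unbound variables. *)
let rec type_of gamma = function
  | Var x -> List.assoc x gamma                        (* rule Var *)
  | Tuple ts -> TupleTy (List.map (type_of gamma) ts)  (* rule Tuple *)
  | TupleGet (t, i) ->
    (match type_of gamma t with
     | TupleTy tys when 1 <= i && i <= List.length tys ->
       List.nth tys (i - 1)                            (* rule TupleGet *)
     | _ -> failwith "ill-typed projection")

let () =
  let gamma = [("x", Base "int"); ("y", Base "bool")] in
  assert (type_of gamma (TupleGet (Tuple [Var "x"; Var "y"], 2)) = Base "bool")
```

The side condition 1 ≤ i ≤ n of rule TupleGet appears directly as the guard on the inner match.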
B. Concrete Interpretation of Skeletal Semantics
First, we define rules to handle environments: the relation p ↦̸ v, which holds when the pattern p
cannot match the value v, and the partial function E + v match(p1, …, pn), which returns the index
of the first pattern matching v together with the correspondingly extended environment:
(NoMatchSameConstr)  p ↦̸ v  ⟹  C p ↦̸ C v
(NoMatchDiffConstr)  C ≠ C′  ⟹  C p ↦̸ C′ v
(NoMatchTuple)       pi ↦̸ vi,   0 < i ≤ n  ⟹  (p1, …, pn) ↦̸ (v1, …, vn)
(NoMatchRecord)      pi ↦̸ vi,   0 < i ≤ k ≤ n  ⟹  (f1 = p1, …, fk = pk) ↦̸ (f1 = v1, …, fn = vn)
(GetMatchFirst)      E + p1 ← v = E′  ⟹  E + v match(p1, …, pn) = (1, E′)
(GetMatchNext)       p1 ↦̸ v,   E + v match(p2, …, pn) = (i, E′)  ⟹  E + v match(p1, …, pn) = (i + 1, E′)
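As an illustration, these two judgments can be sketched in OCaml over simplified pattern and value grammars (variables, constructors, and tuples only). The names extend and first_match are hypothetical, not taken from the Necro tools.

```ocaml
type value = VConstr of string * value | VTuple of value list | VUnit
type pattern = PVar of string | PConstr of string * pattern | PTuple of pattern list

(* E + p <- v: Some of the extended environment, or None when p cannot
   match v (the relation written p |-/-> v in the rules above). *)
let rec extend env p v =
  match p, v with
  | PVar x, _ -> Some ((x, v) :: env)
  | PConstr (c, p'), VConstr (c', v') when c = c' -> extend env p' v'
  | PConstr _, _ -> None                 (* NoMatchSameConstr / DiffConstr *)
  | PTuple ps, VTuple vs when List.length ps = List.length vs ->
    List.fold_left2
      (fun acc p v -> match acc with None -> None | Some e -> extend e p v)
      (Some env) ps vs
  | PTuple _, _ -> None                  (* NoMatchTuple *)

(* E + v match (p1, ..., pn): 1-based index of the first matching branch
   with the extended environment (GetMatchFirst / GetMatchNext). *)
let first_match env v ps =
  let rec go i = function
    | [] -> None
    | p :: rest ->
      (match extend env p v with
       | Some env' -> Some (i, env')
       | None -> go (i + 1) rest)
  in
  go 1 ps
```

For example, first_match [] (VConstr ("S", VUnit)) [PConstr ("Z", PVar "u"); PConstr ("S", PVar "n")] evaluates to Some (2, [("n", VUnit)]): the first branch fails, the second binds n.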
Now we define the rules for concrete interpretation:
(Var)             E(x) = v  ⟹  E, x ⇓t v
(TermUnspecZero)  val x : τ,   arity(x) = 0,   ⟦x⟧(v)  ⟹  E, x ⇓t v
(TermUnspecSucc)  val x : τ,   arity(x) > 0  ⟹  E, x ⇓t ⟨x, ()⟩
(TermSpec)        val x : τ = t,   ∅, t ⇓t v  ⟹  E, x ⇓t v
(Const)           E, t ⇓t v  ⟹  E, (C t) ⇓t C v
(Tuple)           E, t1 ⇓t v1, …, E, tn ⇓t vn  ⟹  E, (t1, …, tn) ⇓t (v1, …, vn)
(Clos)            E, (λp : τ → S) ⇓t ⟨p, E, S⟩
(FieldGet)        E, t ⇓t (f1 = v1, …, fn = vn)  ⟹  E, t.fi ⇓t vi
(TupleGet)        E, t ⇓t (v1, …, vn),   1 ≤ i ≤ n  ⟹  E, t.i ⇓t vi
(Rec)             E, t1 ⇓t v1, …, E, tn ⇓t vn  ⟹  E, (f1 = t1, …, fn = tn) ⇓t (f1 = v1, …, fn = vn)
(FieldSet)        E, t ⇓t (f1 = v1, …, fn = vn),   ∀i ∈ ⟦1; k⟧, E, ti ⇓t w_{j_i},   ∀i ∈ ⟦1; n⟧ ∖ {j1, …, jk}, wi = vi  ⟹  E, t ← (f_{j_1} = t1, …, f_{j_k} = tk) ⇓t (f1 = w1, …, fn = wn)
(Ret)             E, t ⇓t v  ⟹  E, ret t ⇓s v
(Branch)          E, Si ⇓s v,   1 ≤ i ≤ n  ⟹  E, (S1 … Sn) ⇓s v
(LetIn)           E, S ⇓s v,   E + p ← v, S′ ⇓s w  ⟹  E, let p = S in S′ ⇓s w
(Exist)           v ∈ V_τ,   E + x ← v, S ⇓s w  ⟹  E, let x : τ in S ⇓s w
(Match)           E, t ⇓t v,   E + v match(p1, …, pn) = (i, E′),   E′, Si ⇓s w  ⟹  E, match t with |p1 → S1 | … | pn → Sn end ⇓s w
(App)             E, t0 ⇓t c,   E, t1 ⇓t v1, …, E, tn ⇓t vn,   c v1 … vn ⇓app w  ⟹  E, (t0 t1 … tn) ⇓s w
(AppZero)         v ⇓app v
(AppClos)         E + p ← v1, S ⇓s c,   c v2 … vn ⇓app w  ⟹  ⟨p, E, S⟩ v1 … vn ⇓app w
(AppUnspecNext)   arity(x) > n  ⟹  ⟨x, (v1, …, vk)⟩ v_{k+1} … vn ⇓app ⟨x, (v1, …, vn)⟩
(AppUnspecCont)   arity(x) = m ≤ n,   ⟦x⟧(v1, …, vm, c),   c v_{m+1} … vn ⇓app w  ⟹  ⟨x, (v1, …, vk)⟩ v_{k+1} … vn ⇓app w
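The application judgment ⇓app can also be sketched executably. The OCaml fragment below specializes it to unspecified closures ⟨x, (v1, …, vk)⟩, covering AppZero, AppUnspecNext, and AppUnspecCont. The names (arity, interp, app) and the "add" example are hypothetical, and the relational interpretation ⟦x⟧ is approximated here by a function.

```ocaml
type value =
  | VInt of int
  | VUnspec of string * value list   (* <x, (v1, ..., vk)> *)

(* Split the argument list at the arity boundary. *)
let rec take n l = match n, l with
  | 0, _ | _, [] -> []
  | n, x :: r -> x :: take (n - 1) r
let rec drop n l = match n, l with
  | 0, _ | _, [] -> l
  | n, _ :: r -> drop (n - 1) r

(* Stubbed arity and interpretation [[x]] of unspecified terms. *)
let arity = function "add" -> 2 | _ -> 0
let interp x args = match x, args with
  | "add", [VInt a; VInt b] -> VInt (a + b)
  | _ -> failwith "no interpretation"

let rec app v args = match v, args with
  | _, [] -> v                                       (* AppZero *)
  | VUnspec (x, held), _ ->
    let k = List.length held in
    let m = arity x in
    if m > k + List.length args then
      VUnspec (x, held @ args)                       (* AppUnspecNext *)
    else
      let now = take (m - k) args in                 (* complete the arity *)
      let rest = drop (m - k) args in
      app (interp x (held @ now)) rest               (* AppUnspecCont *)
  | _ -> failwith "value is not applicable"
```

For instance, app (VUnspec ("add", [])) [VInt 1] returns the partially applied ⟨add, (1)⟩, and applying that result to VInt 2 yields VInt 3, mirroring how AppUnspecNext accumulates arguments until AppUnspecCont can fire.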