Towards an Implementation in LambdaProlog of the Two Level Minimalist Foundation

Alberto Fiori (alberto.fiori92@gmail.com)
Claudio Sacerdoti Coen (University of Bologna, sacerdot@cs.unibo.it)
                                                                   Abstract

In this work we present an implementation of the Minimalist Foundation [MS05; Mai09] using the λProlog [NM88] implementation ELPI [Dun+15]. The Minimalist Foundation is a two-level constructive type theory with both an extensional level with quotients and sets and an intensional level satisfying the proofs-as-programs interpretation. Linking the two levels there is an interpretation of the extensional level in a dependent setoid model built upon the intensional level, where extensionality is modeled by intensional propositions. In our work we have implemented type checkers for both levels and the translation of a significant subset of well-typed extensional terms and types to well-typed intensional terms and types. Future work includes: formal verification in Abella of our implementation, code extraction at the intensional level and, finally, a proof assistant at the extensional level (using the constraint programming functionality offered by ELPI).




Copyright © by the paper's authors. Copying permitted for private and academic purposes.
In: O. Hasan, F. Rabe (eds.): Proceedings of the CICM 2018, Hagenberg, Austria, 13-08-2018, published at http://ceur-ws.org
Introduction
In 2005 M. Maietti and G. Sambin [MS05] argued for the necessity of building a foundation for constructive mathematics
to be taken as a common core among relevant existing foundations in axiomatic set theory, such as Aczel-Myhill's CZF
and Mizar's Tarski-Grothendieck set theory, in category theory, such as the internal theory of a topos, and in type theory,
such as Martin-Löf's type theory and Coquand's Calculus of Inductive Constructions. Moreover, they asked the foundation
to satisfy the "proofs-as-programs" paradigm, namely the existence of a realizability model from which programs can be
extracted from proofs. Finally, the authors wanted the theory to be appealing to standard mathematicians and therefore they
wanted extensionality in the theory, e.g. to reason with quotient types and to avoid the intricacies of dependent types in
intensional theories.
   In the same paper they noticed that theories satisfying extensional properties, like extensionality of functions, cannot
satisfy the proofs-as-programs requirement. Therefore they concluded that in a proofs-as-programs theory one can only
represent extensional concepts by modelling them via intensional ones in a suitable way, as already partially shown in some
categorical contexts.
   Finally, they ended up proposing a constructive foundation for mathematics equipped with two levels: an intensional
level that acts as a programming language and is the actual proofs-as-programs theory, and an extensional level that acts as
the set theory in which to formalize mathematical proofs. The constructivity of the whole foundation then relies on the fact
that the extensional level must be implemented over the intensional level, but not only this: following Sambin's
forget-restore principle, they also required that extensional concepts must be abstractions of intensional ones obtained by
forgetting irrelevant computational information. Such information is then restored when extensional concepts are translated
back at the intensional level.
   In 2009 M. Maietti [Mai09] presented the syntax and judgements of the two levels, together with a proof that a suitable
completion of the intensional level provides a model of the extensional one. The proof is constructive and based on
a sequence of categorical constructions, the most important being the construction of a quotient model within the inten-
sional level, where setoids are used to encode extensional equality in an intensional language, and a notion of canonical
isomorphism between intensional dependent setoids.
   In this work we will present an implementation of the following software components:

    1. a type checker for the intensional level

    2. a reformulation of the extensional level that allows proof objects to be stored syntactically; these are later used to
       provide the information to be restored when going from the extensional to the intensional level, so as to avoid
       general proof search during the interpretation

    3. a type checker for the obtained extensional level

    4. a translator from well-typed extensional terms and types to well-typed intensional terms and types

     As a future extension we hope to also implement other software components:

    1. a formal validation (in Abella) of our reformulation of the extensional level and of the correctness of the implementation

    2. code extraction at the intensional level (in Haskell/Lisp)

    3. a proof assistant at the extensional level (using the constraint programming functionality offered by ELPI)

Combining the translator with the code extraction component it will be possible to extract programs from proofs written at
the extensional level.
   We have chosen λProlog [NM88] (and in particular its recent implementation ELPI [Dun+15]) as the programming
language in which to write the two type checkers and the interpreter. The benefits are that λProlog takes care of the
intricacies of dealing with binders and alpha-conversion; moreover, a λProlog implementation of a syntax-directed
judgement is just made of simple clauses that are almost literal translations of the judgemental rules. This allows humans
(and logicians in particular) to easily inspect the code to spot possible errors.

1     Higher Order Logic Programming
At the beginning of our work it was decided that this project should be implemented using λProlog and ELPI, one of
its implementations, which allows for more flexibility (e.g. optional type declarations), more functionality (e.g. constraint
programming, which will be useful in later phases of the project) and also improved performance compared to other major
implementations of λProlog.
   As a programming language λProlog is naturally a good fit for a binder-heavy project like ours, as it natively offers all
the functionality required to handle α-equivalence, capture-avoiding substitution and higher-order unification; moreover
its high level abstractions and backtracking allow for an almost 1-1 encoding of inference rules. As an example, below we
compare a natural deduction inference rule with the corresponding λProlog clause in the case of Π-Formation. In
natural deduction style we have the following inference rule

$$\frac{C(x)\ set\ [x \in B] \qquad B\ set}{\Pi_{x \in B}\, C(x)\ set}\ \Pi\text{-F}$$

while in λProlog we would write
ofType (setPi B C) KIND3 IE
    :- ofType B KIND1 IE                  % the domain B has kind KIND1
    , (pi x\ locDecl x B
        => ofType (C x) KIND2 IE)         % under a fresh x : B, C(x) has kind KIND2
    , pts_fun KIND1 KIND2 KIND3           % KIND3 is the kind of the whole product
    .
where ofType represents the set judgement, locDecl is used to locally introduce hypotheses into the context and pts_fun is
used to decide whether the produced type is a set or a collection.
   While the λProlog version is more verbose, it should be noted that this single rule encodes the Π-Formation rule for both
the extensional and the intensional level, and for all combinations of argument kinds among sets, collections, propositions
and small propositions.

1.1     Changes to the calculus
As pointed out in the introduction before we could begin the implementation we had to apply some changes to the calculi
exposed in [Mai09].

1.1.1    Syntax Directed Rules
Between the rules of mTT and emTT there is a rule which states that equal type have the same elements, that is:
                                                     π‘Žβˆˆπ΄ 𝐴=𝐡
                                                             conv
                                                       π‘Žβˆˆπ΅
   This rule is problematic from an implementation point of view since it can be triggered at any moment and (at the
extensional level) would start a proof search trying to prove $A = B$ for an unknown $A$. A simple solution is to
preemptively compose this rule with all the other rules that might need it; for example we can see that the elimination rule
for dependent products changes from

$$\frac{f \in \Pi_{x \in B}\, C(x) \qquad t \in B}{\mathsf{apply}(f, t) \in C(t)}\ \Pi\text{-E}$$

to

$$\frac{f \in \Pi_{x \in B}\, C(x) \qquad t \in B' \qquad B = B'}{\mathsf{apply}(f, t) \in C(t)}\ \Pi\text{-E}$$

the main difference being that in the syntax-directed version we check that the domain type of $f$ and the inferred type of
$t$ are convertible instead of requiring syntactic equality as in the original rule. In this way the power of the calculus is
surely not increased, as the syntax-directed rule is admissible in the calculus with the two previous rules; indeed

$$\frac{f \in \Pi_{x \in B}\, C(x) \qquad \dfrac{t \in B' \qquad B = B'}{t \in B}\ \text{conv}}{\mathsf{apply}(f, t) \in C(t)}\ \Pi\text{-E}$$

is a derivation of the same judgement from the same hypotheses.
    For this change to be effective we also need to reformulate the conversion rules given by the calculi (e.g.
$\mathsf{apply}(\lambda_B x.f, t) = f\{t/x\}$) as directed rules describing a reduction predicate $\rhd$ (e.g.
$\mathsf{apply}(\lambda_B x.f, t) \rhd f\{t/x\}$), from which we define equality as reducibility to a common term; this is
equivalent to the original equality given that both calculi of the Minimalist Foundation enjoy Church-Rosser confluence
and strong normalization. This new relation is implemented by the predicate conv.
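As a minimal sketch of how such a directed rule looks in our λProlog encoding (the application constructor app is
illustrative and not necessarily the name used in the code; the actual reduction clauses live with each type constructor and
are exposed through the hstep predicate described in Section 3.2.3):

% one-step head reduction for beta: apply(lambda_B x.f, t) ⊳ f{t/x};
% the substitution is meta-level application of the λProlog function F
% (the constructor name `app` is illustrative)
hstep (app (lambda B F) T) (F T).

Equality as reducibility to a common term is then obtained by reducing both sides and comparing the results, as detailed in
Section 3.2.3.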
   The full modifications to the calculi in our implementation needed to make the rules syntax directed mostly amount to
checking for convertibility/equality where mTT/emTT require syntactic equality. We plan to provide a full proof of the
validity of this modification, and of the following ones, in Abella in the future.

1.1.2   Informative Proof-terms at the Extensional Level and Extensional Equality

$$\frac{b \in B \qquad \mathsf{true} \in C(b) \qquad C(x)\ prop\ [x \in B]}{\mathsf{true} \in \exists_{x \in B}\, C(x)}\ \exists\text{-I} \qquad\qquad \frac{\mathsf{true} \in \mathsf{Eq}(A, a, b)}{a = b \in A}\ \text{Eq-E}$$
In the calculus emTT a token proof term $\mathsf{true}$ is used to represent all proofs of propositions and small propositions; this poses
problems during the interpretation, when we need to construct intensional proof terms given only the token $\mathsf{true}$. Moreover,
the strong elimination rule for the propositional extensional equality turns every conversion into an undecidable problem of
general proof search for $\mathsf{true} \in \mathsf{Eq}(A, a, b)$. To reasonably implement the extensional calculus and the interpretation we
need a way to construct intensional proofs from extensional proofs and a way to contain the undecidability of the strong
elimination rule for the extensional propositional equality.
    A first approach to the problem could have been to have both the extensional type checker and the interpretation work
on full derivations; this is the method chosen by Maietti in [Mai09] and would solve the problems regarding the missing
information in the extensional calculus and language, but it was discarded because, from an implementation standpoint,
the code to encode a derivation would be almost a copy of the code needed to encode terms, yet with subtle but non-trivial
differences.
    The chosen solution was to both add informative proof terms to the extensional level and weaken the elimination rule of
the propositional extensional equality. The first part of the solution makes the extensional rules for propositions more
similar to the intensional ones

$$\frac{b \in B \qquad p \in C(b) \qquad C(x)\ prop\ [x \in B]}{\langle b, p \rangle_{\exists} \in \exists_{x \in B}\, C(x)}\ \exists\text{-I}$$

so that more information is available during the interpretation.
   The second part modifies the extensional propositional equality elimination rule into a weaker but equivalent form; the
rule

$$\frac{\mathsf{true} \in \mathsf{Eq}(A, a, b)}{a = b \in A}\ \text{Eq-E}$$

becomes

$$a = b \in A\ [\mathsf{h} \in \mathsf{Eq}(A, a, b)]\ \text{Eq-E}$$

This weaker form has nicer computational properties, the most important one being that checking whether it can be applied
in a specific context is always deterministic and terminating, while still allowing hypotheses to be introduced locally into a
context via implication or dependent types.
    This solution can be seen as an extension of simply encoding proofs as λ-terms; we have already noted that at the exten-
sional level no such encoding is satisfying because of the propositional equality elimination rule, whose appearances
in the λ-terms would be confusing at the extensional level. From this point of view our solution can be thought of as real-
izing that adding the needed equalities to the context is enough, and that the locally scoped nature of the context provides
exactly the functionality we need to use equalities. Compared to using whole derivation trees we now use λ-terms plus
some context.
    A drawback of this solution is that it is complex to use even simple lemmas; to this problem we offer two solutions: to
manually apply the cut elimination of the calculus or to introduce a new let-in construct

$$\mathsf{let}\ x := t_1 \in T\ \mathsf{in}\ t_2$$

used to introduce locally typed definitions.
    As of this writing this let-in is not yet implemented, as there are many different ways the needed functionality can be
obtained, each with different compromises, and it is not entirely clear which implementation method is the most adequate
for our project. In our simple cases it is acceptable to manually perform the cut elimination to locally include the desired
lemma, so we plan to implement a proper let-in once there is a strong need for it.
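As a hedged sketch, a typing clause for this construct could look as follows (the constructor letIn and the auxiliary
predicate locDef are hypothetical names, since the construct is not part of the current code):

% possible typing rule for "let x := t1 ∈ T in t2": check t1 against T,
% then type t2 under a local declaration and a local definition
% (letIn and locDef are hypothetical, not in the current implementation)
of (letIn T1 T T2) (TY2 T1) IE
    :- isa T1 T IE
    , (pi x\ locDecl x T => locDef x T1 => of (T2 x) (TY2 x) IE)
    .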
1.1.3 ξ-Rule
Another difference between the calculi in Maietti's work and ours is that our model validates the ξ-rule (conversion under
λ-abstraction) at the intensional level; removing it would require a refactoring of the conversion to include a list of
forbidden variables that block conversion. Until now it has not been a priority to implement this functionality, but we have
some feasible implementation paths.

2     The Interpretation
One of the main results of [Mai09] is a categorical proof of the validity of an interpretation of an extensional type theory
into an intensional one.
   In particular, first a quotient model Q(mTT) is built on the theory mTT; this model is given as a category where objects
are pairs $(A, =_A)$ (abbreviated to $A_=$ when $=_A$ is clear from the context) where $A$ is an intensional type called
the support and

$$x =_A y\ props\ [x \in A, y \in A]$$

is an equivalence relation on $A$, which means that there are terms $\mathsf{rfl}_A$, $\mathsf{sym}_A$ and $\mathsf{tra}_A$ such that the following judgements
hold in mTT

$$\mathsf{rfl}_A(x) \in x =_A x\ [x \in A]$$
$$\mathsf{sym}_A(x, y, u) \in y =_A x\ [x \in A, y \in A, u \in x =_A y]$$
$$\mathsf{tra}_A(x, y, z, u, v) \in x =_A z\ [x \in A, y \in A, z \in A, u \in x =_A y, v \in y =_A z]$$
In the following we will refer to the objects of Q(mTT) as setoids.
   The morphisms of Q(mTT) from $(A, =_A)$ to $(B, =_B)$ are all the well-typed terms of mTT

$$f(x) \in B\ [x \in A]$$

that preserve the setoid equality, that is for which there exists a proof-term

$$\mathsf{pr}_1(x, y, z) \in f(x) =_B f(y)\ [x \in A, y \in A, z \in x =_A y]$$

Equality between two morphisms $f, g : (A, =_A) \to (B, =_B)$ holds if they map equal elements of $A_=$ to equal elements of
$B_=$, that is if there is a proof-term

$$\mathsf{pr}_2(x) \in f(x) =_B g(x)\ [x \in A]$$
    Starting from the category Q(mTT) we can also define dependent setoids: given a setoid $A_=$ we say that

$$B_=(x)\ [x \in A_=]$$

is a dependent setoid if its support is a dependent type $B(x)\ type\ [x \in A]$, if its relation is a dependent proposition

$$y =_{B(x)} y'\ [x \in A, y \in B(x), y' \in B(x)]$$

and if there is a substitution morphism

$$\sigma_{x_1}^{x_2}(t, y) \in B(x_2)\ [x_1 \in A, x_2 \in A, t \in x_1 =_A x_2, y \in B(x_1)]$$

that satisfies the properties given by definition 4.9 in [Mai09], that is: it needs to preserve the equality of $B_=$; it needs to
be proof-irrelevant, not depending on the particular choice of the proof-term $t \in x_1 = x_2$ (so that we can write
$\sigma_{x_1}^{x_2}(y)$ in place of $\sigma_{x_1}^{x_2}(t, y)$); the identity substitution¹ $\sigma_{x_1}^{x_1}(\mathsf{rfl}_A(x_1), -)$ must always exist; and lastly substitution
morphisms need to be composable, that is we require

$$\sigma_{x_2}^{x_3}(\sigma_{x_1}^{x_2}(y)) =_{B(x_3)} \sigma_{x_1}^{x_3}(y)$$

in the appropriate context².

   ¹ Which in general does not need to be the identity morphism; for example

$$\sigma_x^x(\mathsf{rfl}_A(x), w) \equiv \mathsf{elim}_+(w,\ (z).\mathsf{inl}(\sigma_x^x(\mathsf{rfl}_A(x), z)),\ (z).\mathsf{inr}(\sigma_x^x(\mathsf{rfl}_A(x), z)))$$

if $w \in B + C\ [x \in A]$.
   ² It is worth noting that in the Minimalist Foundation the only types that directly take terms as arguments are the
propositional equalities (written $\mathsf{Eq}$ and $\mathsf{Id}$ in emTT and mTT respectively). For any $B(x)\ type\ [x \in A]$ the only possible
occurrences of $x$ are as an argument of $\mathsf{Eq}$ (resp. $\mathsf{Id}$ in mTT); together with the absence of strong elimination of terms
towards types, this means that conversion of types always reduces to conversion of terms and that types containing no
propositions cannot be truly dependent. For example the canonical way to construct the type of lists over $C$ of length $n$ is
via a dependent sum $\Sigma(\mathsf{List}(C), (l).P(l, n))$ where $P(l, n)$ is the proposition "the length of $l$ equals $n$".
    Over these substitutions we can define the canonical isomorphism between two types $B_1$ and $B_2$ as the only $\sigma$ between
them. In our code we have implemented these isomorphisms with the predicate tau B B' P.
    These constructions are needed because the strong elimination rule of the extensional propositional equality can derive
a judgemental equality from just a hypothetical assumption. As an example, a judgement like
$\mathsf{true} \in \mathsf{Eq}(C, c, d)\ [c \in C, d \in C]$ is well formed but not derivable in emTT, but if we add one more assumption
to the context we obtain

$$\mathsf{true} \in \mathsf{Eq}(C + C, \mathsf{inl}(c), \mathsf{inl}(d))\ [c \in C, d \in C, h \in \mathsf{Eq}(C, c, d)]$$

which is derivable, since we can use the rules prop-true and Eq-E to derive the judgement $c = d \in C$ from the given
context.
   This can also be seen in the fact that the judgement

$$\forall_{x \in 1}\ \forall_{f \in \Pi(\mathsf{Eq}(1,x,\star),\,1)}\ \mathsf{Eq}(1, \star, x) \Rightarrow \mathsf{Eq}(1,\ \mathsf{apply}(f, \mathsf{eq}(1, \star)),\ \mathsf{apply}(f, \mathsf{eq}(1, \star)))\ props$$

is derivable in the empty context even if there is a mismatch between the types of $f$ and $\mathsf{eq}(1, \star)$. Here for example we
would want to apply a canonical isomorphism to $[\![\mathsf{eq}(1, \star)]\!]$³ so that its type matches the domain of $[\![f]\!]$; after the
correction we obtain $\mathsf{apply}(f,\ \tau_{[\![\mathsf{Eq}(1,\star,\star)]\!]}^{[\![\mathsf{Eq}(1,x,\star)]\!]}([\![\mathsf{eq}(1, \star)]\!]))$, which is well-typed in mTT.
   ³ In this work we use the double bracket notation for the interpretation instead of the λI used in [Mai09].
    From this introduction to the structure of the interpretation we can see that the method used to force the semantics of
emTT on mTT (via Q(mTT)) is to require the creation of suitable proof terms, thus restricting the available language to a
subset that preserves the extensional semantics.
    We can see another, more explicit, application of this method in the interpretation of dependent products: a function
term of emTT is interpreted as a pair whose first projection is an intensional function and whose second projection is a
proof that the first projection preserves the setoid equalities of Q(mTT).
    From the point of view of the implementation this means that we have to produce various proofs for each of the type
constructors we wish to include in our implementation.
    As of now we have implemented all that is needed to fully interpret the subset of emTT comprising singletons, depen-
dent products, implications, universal quantifications and propositional equality. We believe that this subset exhibits enough
of the complexity of the interpretation and feel confident that the structure of our implementation will be able to cover the
needs of the remaining constructors as well.

3     The Code
All the code was written in λProlog (in particular for the ELPI implementation [Dun+15] of λProlog) and makes extensive
use of many high level features that would not be available in other languages. Still, our code does not use many of the
more advanced features offered by ELPI, like constraint programming, which we plan to use in the next phase of the
project when we start to implement a kernel for the planned proof checker.
   In this section we will show some extracts of the code and explain how we obtained the desired functionalities.

3.1    Project Structure and Extensibility
We have divided the code into 4 categories: main, calc, test, debug. The first, main, holds the type declarations for the other
modules and defines the general functionality of the theory shared by all type constructors of both levels. The second is a
folder containing a .elpi file for each type constructor; in each of these files we implement the typing and conversion
rules for that particular type constructor. For example, in calc/setPi.elpi we keep all definitions related to dependent
products, like the implementation of Π-I:
of (lambda B F) (setPi B C) IE
    :- ofType B _ IE                                 % the domain is a well-formed type
    , (pi x\ locDecl x B => isa (F x) (C x) IE)      % under x : B the body has type C(x)
    .
and the interpretation of a function term:
interp (lambda B F) R
    :- spy(of (lambda B F) (setPi B C) ext)
    , spy(interp (setPi B C) (setSigma (setPi Bi Ci) H ))
    , macro_interp B ( x\_\_\xi\_\_\ interp (F x) (Fi xi))
    , setoid_eq B EquB
    , macro_interp B (x1\x2\h\x1i\x2i\hi\tau_proof_eq
        (F x1)
        (F x2)
        (C x2)
        (K_EQU x1i x2i hi))
    , R = pair (setPi Bi Ci) (H) (lambda Bi Fi)
        (forall_lam Bi x1\
         forall_lam Bi x2\
             forall_lam (EquB x1 x2) h\ K_EQU x1 x2 h)
    .

   The last two, test and debug, hold utilities to quickly test the predicates from main and calc in addition to useful
predicates needed to inspect the runtime behaviour of the type checking and of the interpretation.

3.2     Explanation of the Predicates
We can divide the predicate defined in our implementation in three groups: judgement predicates, conversion predicates
and interpretation predicates.

3.2.1 of, isa, ofType, isaType and locDecl
In the first group are the predicates corresponding to the judgements $A\ type\ [\Gamma]$ or $a \in A\ [\Gamma]$. We use the
of Term Type Level predicate to represent the judgement $Term \in Type\ [\Gamma]$ at the level indicated by the third argu-
ment; we also define another predicate isa Term Type Level to check whether the inferred type of Term is convertible with
Type at level Level. A major difference between these two predicates is that for of the first and last arguments are inputs
and the middle one is an output, while for isa all three arguments are inputs. Similarly ofType Type Kind Level and
isaType Type Kind Level implement the corresponding functionality at the type level.
   locDecl is used to form the context where of will look up the types of variables via the clause
of X Y _ :- locDecl X Y. We have no need for a parameter indicating the level for locDecl because exten-
sional and intensional variables are disjoint and we can assume that we will never try to compute the extensional type of
an intensional variable. Moreover, as of now our implementation validates the ξ-rule at the intensional level, so that the
conversion can be level agnostic as well.
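For instance, a minimal sketch of how isa can be expressed in terms of of and conv (the actual clause in the code may
differ in details such as cut placement):

% check Term against an expected Type: first infer a type with `of`,
% then test the inferred type for convertibility with the expected one
isa Term Type IE :- of Term VType IE, conv VType Type.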

3.2.2    pts_* Predicates
In the Minimalist Foundation in general the kind of a type is not unique but admit a most general unifier, this unifier will
be the intended output of ofType and the logic to find it is implemented in the various PTS predicates. As an example a
dependent product Ξ (𝐡, 𝐢) is never a proposition and is a set if both 𝐴 𝑠𝑒𝑑 and 𝐡(π‘₯) 𝑠𝑒𝑑 [π‘₯ ∈ 𝐴] hold.
   This can be expressed with the following code:

pts_leq A A.            % reflexivity
pts_leq set col.        % every set is a collection
pts_leq props set.      % every small proposition is a set
pts_leq propc col.      % every proposition is a collection
pts_leq props propc.    % every small proposition is a proposition
pts_leq props col.

pts_fun A B set :- pts_leq A set, pts_leq B set, !.
pts_fun _ _ col.

Other variations allow one to decide the most general kind of universal/existential quantifiers, connectives or lists.
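For example, with these clauses the kind computed for a dependent product with a set domain and a small-propositional
codomain is set, while anything involving a collection falls through to the catch-all clause:

% illustrative queries (e.g. as goals inside a test predicate):
% pts_fun set props K   succeeds with K = set
% pts_fun col props K   succeeds with K = col (via the catch-all clause)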
3.2.3 conv, dconv, hnf and hstep
While conversion at the intensional level is almost straightforward, at the extensional level we need to be more careful, as a
naive implementation would be extremely inefficient due to the rule Eq-E

$$\frac{\mathsf{true} \in \mathsf{Eq}(C, c, d)\ [c \in C, d \in C]}{c = d \in C}$$

   The solution we have chosen to implement consists of two steps: first we weaken the Eq-E rule to perform a context
lookup instead of trying to prove a judgement, and then we divide the conversion into three different operations.

conv A A :- !.
conv A B :- locDecl _ (propEq _ A B).
conv A B :- (hnf A A'), (hnf B B'), (dconv A' B').

   The predicate that actually implements the conversion judgement is conv; for this predicate both arguments are inputs.
When called on two terms the predicate first checks whether the two terms are unifiable at the meta-level; if this fails it
searches for a context declaration stating the wanted equality; if this also fails it reduces both terms to head normal form
and applies a dconv step to the head normal forms. dconv's role is to propagate the conversion to subterms; it assumes that
it receives two terms in head normal form (as it will only ever be called on outputs of hnf) and checks for conversion of the
corresponding subterms of its arguments.

dconv (setSigma B C) (setSigma B' C')
    :- (conv B B')
    , (pi x\ locDecl x B => conv (C x) (C' x))
    .
dconv (pair B C BB CC) (pair B' C' BB' CC')
    :- (conv B B')
    , (pi x\ locDecl x B => conv (C x) (C' x))
    , (conv BB BB')
    , (conv CC CC')
    .
dconv (elim_setSigma Pair M MM) (elim_setSigma Pair' M' MM')
    :- (conv Pair Pair')
    , (of Pair (setSigma B C) _)
    , (pi z\ locDecl z (setSigma B C) => conv (M z) (M' z))
    , (pi x\ pi y\ locDecl x B
        => locDecl y (C x)
        => conv (MM x y) (MM' x y))
    .

In general it is necessary to declare a dconv clause for each new non-constant constructor (constant constructors are
all handled by a dconv A A :- !. clause). The main reason to use dconv is that it does not trigger the context lookup
associated with the Eq-E rule.
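hnf itself is defined in terms of hstep, which performs a single head-reduction step. A sketch of the expected structure
(the actual clauses may handle more cases):

% iterate single head-reduction steps until no step applies
hnf A B :- hstep A A1, !, hnf A1 B.
hnf A A.   % A is already in head normal form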

3.2.4 tau, setoid_*, proof_eq and interp
The main entry point of the interpretation is the predicate interp; for example, the typical structure of our tests is the
following (a code sketch is given after the list):
  1. Construct a suitable extensional type

  2. Typecheck the type at the extensional level

  3. Interpret the type to the intensional level

  4. Typecheck this last result.
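The pipeline can be sketched as a single predicate (the intensional level tag int is assumed here; the actual tests also wrap
the calls in the spy tracing predicate seen in Section 3.1):

% sketch of the test pipeline for an extensional type T
test_type T
    :- ofType T _ ext      % 2. typecheck T at the extensional level
    , interp T Ti          % 3. interpret it into the intensional level
    , ofType Ti _ int      % 4. typecheck the interpretation
    .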

During the interpretation many other auxiliary predicates are needed. Here we will give a description of the most important
ones.
   As explained above, during the interpretation we need to correct the types of interpreted terms to compensate for the
different rules at the two levels; in our code we have chosen to implement this functionality with the predicate tau T1 T2 P.
tau takes as input two mutually convertible extensional types and returns a λProlog function P which converts elements of
type T1 into elements of type T2. It is important to note that P is a pure λ-function at the meta-level, and thus the conversion
between the two types T1 and T2 cannot use backtracking or pattern matching; thankfully a pure function is all we need,
because canonical isomorphisms only apply η-expansions and do not actually inspect the term they are applied to, only its
type, which is completely known to tau. As an example we show how this type correction would look for a disjoint
sum, for a variable $w \in [\![B(x) + C(x)]\!]\ [\,[\![x \in A]\!]\,]$ when transitioning to the type $[\![B(x') + C(x')]\!]$

$$\sigma_x^{x'}(w) \equiv \mathsf{elim}_+(w,\ (y_1).\sigma_x^{x'}(y_1),\ (y_2).\sigma_x^{x'}(y_2))$$

    We can see that we apply structural recursion over the types even when computing a function over terms; this is
helpful when treating types with more than one canonical constructor and when dealing with variables, but in general it is
necessary, since we cannot assume that all terms will have a canonical head normal form.
    It is important to note that all predicates working at the interpretation level always assume that they handle well-formed
and well-typed terms and types, so that we do not need to fill our code with an enormous amount of runtime checks.
    A related predicate is tau_eq, which at the moment is conceptually the most complex predicate in the code; its aim
is to implement the second and third properties required of substitution morphisms: preserving the equalities of Q(mTT). In
formulae, this means that the following hypothetical judgement should be derivable for some choice of $t_2$

$$t_2 \in \sigma_{x_1}^{x_2}(y) =_{B(x_2)} \sigma_{x_1}^{x_2}(y')\ [x_1 \in A,\ x_2 \in A,\ t_1 \in x_1 = x_2,\ y \in B(x_1),\ y' \in B(x_1),\ w \in y =_{B(x_1)} y'] \quad (1)$$

tau_eq's role is exactly to find a constructive and functional (after taking as inputs $B(x_1)$ and $B(x_2)$) transformation that
produces such a $t_2$ from $y$, $y'$ and $t_1$.
   This predicate becomes most useful during the interpretation of dependent products, since in Q(mTT) a dependent
function is represented as a pair containing an mTT function $f$ and a proof that $f$ preserves the extensionality of Q(mTT),
similarly to what is done in [Nec97].
   A related problem in dealing with dependent products is that we need to automatically generate a proof that the
interpretations of emTT functions preserve the equalities of Q(mTT). The generation of these proofs is handled by the
predicate proof_eq.
    A family of related predicates is built around the setoids of Q(mTT); the principal one, setoid_eq, takes an exten-
sional type as input and returns an mTT proposition representing the setoid equality of the interpretation of the given type;
three related predicates setoid_refl, setoid_symm and setoid_trans produce proofs that the proposition returned by
setoid_eq is indeed an equivalence relation.
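As an illustration of the intended shape of setoid_eq, consider the singleton set (the constructor names singleton and
propId are assumed here for the extensional singleton set and the intensional propositional equality; the actual clauses may
differ):

% illustrative shape: the setoid equality over the interpretation of the
% singleton set can be taken to be the intensional propositional equality
% (singleton and propId are assumed constructor names)
setoid_eq singleton (x\ y\ propId singleton x y).

Note that the result is a binary relation abstracted over its two arguments, matching the usage setoid_eq B EquB followed
by EquB x1 x2 in the interp clause shown in Section 3.1.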

References
[Dun+15]      Cvetan Dunchev et al. "ELPI: Fast, Embeddable, λProlog Interpreter". In: Logic for Programming,
              Artificial Intelligence, and Reasoning. Springer, 2015, pp. 460–468.
[Mai09]       Maria Emilia Maietti. "A minimalist two-level foundation for constructive mathematics". In: Annals of Pure
              and Applied Logic 160.3 (2009), pp. 319–354.
[MS05]        Maria Emilia Maietti and Giovanni Sambin. "Toward a minimalist foundation for constructive mathematics".
              In: From Sets and Types to Topology and Analysis: Practicable Foundations for Constructive Mathematics 48
              (2005), pp. 91–114.
[Nec97]       George C. Necula. "Proof-carrying code". In: Proceedings of the 24th ACM SIGPLAN-SIGACT Symposium on
              Principles of Programming Languages. ACM, 1997, pp. 106–119.
[NM88]        Gopalan Nadathur and Dale Miller. "An Overview of λProlog". 1988.