=Paper=
{{Paper
|id=Vol-2951/paper2
|storemode=property
|title=Left Recursion by Recursive Ascent
|pdfUrl=https://ceur-ws.org/Vol-2951/paper2.pdf
|volume=Vol-2951
|authors=Roman R. Redziejowski
|dblpUrl=https://dblp.org/rec/conf/csp/Redziejowski21
}}
==Left Recursion by Recursive Ascent==
Left Recursion by Recursive Ascent

Roman R. Redziejowski

Abstract: Recursive-descent parsers cannot handle left recursion, and several solutions to this problem have been suggested. This paper presents yet another solution. The idea is to modify the recursive-descent parser so that it reconstructs left-recursive portions of the syntax tree bottom-up, by "recursive ascent".

Keywords: parsing, recursive descent, left recursion

1. Introduction

A recursive-descent parser is a collection of "parsing procedures" that correspond to different syntactic units and call each other recursively. Some of these procedures must choose what to call next, and the choice is made by looking at the input ahead, on a depth-first basis. This process does not work if the grammar is left-recursive: a procedure may indefinitely call itself, facing the same input.

One suggested solution [1] counts the number of recursive calls of each procedure and stops when the count reaches a pre-set bound. The process is repeated with the bound starting at 1 and gradually increased. Another solution [2, 3] saves the input consumed by invocations of parsing procedures and tricks the parser into using the saved result instead of making a recursive call. A third idea sees parsing as reconstruction of the input's syntax tree. The classical process builds that tree top-down, but [4] suggests that portions of the tree involved in left recursion can be reconstructed starting from the bottom.

We present here an approach based on this idea. To reconstruct portions of the tree bottom-up we use procedures that call each other recursively. These procedures can be regarded as new parsing procedures. We incorporate them into the recursive-descent parser, and the result is a recursive-descent parser for a new grammar, referred to as the "dual grammar". The dual grammar is (normally) not left-recursive and defines the same language as the original one.
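To recall why plain recursive descent loops, consider a minimal sketch (our own illustration, using a hypothetical one-rule grammar A ← A a | b, not taken from the paper). The procedure for A begins by calling itself at the same input position, so it never consumes anything and never terminates; Python's recursion limit turns the infinite descent into a RecursionError:

```python
import sys

def parse_A(text, pos):
    # Alternative 1: A <- A "a" -- recursive call at the SAME position,
    # so this descent can never terminate.
    p = parse_A(text, pos)
    if p is not None and p < len(text) and text[p] == "a":
        return p + 1
    # Alternative 2: A <- "b" -- never reached.
    if pos < len(text) and text[pos] == "b":
        return pos + 1
    return None

sys.setrecursionlimit(500)
try:
    parse_A("baa", 0)
    looped = False
except RecursionError:
    looped = True
print(looped)  # True: the parser overflows instead of parsing "baa"
```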
After the necessary definitions in Section 2, Section 3 gives an example of recursive ascent and shows that the procedures performing it can be seen as parsing procedures for new non-terminals. Section 4 incorporates these new non-terminals into the original grammar, thus obtaining the dual grammar. Two Propositions state the essential properties of that grammar. Section 5 discusses handling choice expressions, and Section 6 discusses the elements omitted to simplify the presentation. Section 7 contains some final remarks: the relation to other solutions and unsolved problems. Proofs of the Propositions are given in the Appendix.

29th International Workshop on Concurrency, Specification and Programming (CS&P'21), September 2021, Berlin, Germany. roman@redz.se, http://www.romanredz.se (R. R. Redziejowski). © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.

2. Basic concepts

We consider a BNF-like grammar G = (N, Σ, E, ℰ, N_s) with a finite set N of non-terminals, a finite set Σ of terminals, a finite set E of expressions, a function ℰ from non-terminals to expressions, and the start symbol N_s. An expression is one of these:

• e ∈ Σ ("terminal"),
• e ∈ N ("non-terminal"),
• e_1 ... e_n ("sequence"),
• e_1 | ... | e_n ("choice"),

where each e_i is an expression. The function ℰ is defined by a set of rules of the form N ← e, where e is the expression assigned by ℰ to non-terminal N. We often write N ← e to mean e = ℰ(N). To simplify the presentation, we did not include the empty string ε among expressions. In the following, expressions e ∈ Σ and e ∈ N will be viewed as special cases of choice expression with n = 1. Non-terminal N ← e_1 ... e_n derives the string e_1 ... e_n of symbols, while N ← e_1 | ... | e_n derives one of e_1, ..., e_n.
The derivation is repeated to obtain a string of terminals. This process is represented by a syntax tree. Figures 1 and 2 are examples of grammar G, showing syntax trees of strings derived from the start symbol. The set of all strings derived from N ∈ N is called the language of N and is denoted by ℒ(N).

Figure 1: Example of grammar G and syntax tree of "xabay" (tree drawing omitted):
N = {Z, A, A1, B, B1, B2}, Σ = {a, b, x, y}, N_s = Z
Z ← x A y
A ← A1 | a
A1 ← B a
B ← B1 | B2 | b
B1 ← A b
B2 ← B b

For e1 ∈ N and e2 ∈ N ∪ Σ, define e1 first→ e2 to mean that the parsing procedure for e1 may call that for e2 on the same input. We have thus:

• If e ← e_1 ... e_n, e first→ e_1.
• If e ← e_1 | ... | e_n, e first→ e_i for 1 ≤ i ≤ n.

Let First→ be the transitive closure of first→. Non-terminal N ∈ N is recursive if N First→ N. The set of all recursive non-terminals of G is denoted by R. All non-terminals in Figure 1 except Z, and all non-terminals in Figure 2, are recursive.

Figure 2: Example of grammar G and syntax tree of "a*a+a*a" (tree drawing omitted):
N = {E, E1, F, F1}, Σ = {a, +, *}, N_s = E
E ← E1 | F
E1 ← E + F
F ← F1 | a
F1 ← F * a

Define the relation between recursive N_1, N_2 ∈ R that holds if N_1 First→ N_2 First→ N_1. This is an equivalence relation that partitions R into equivalence classes. We call them recursion classes. The recursion class of N is denoted by C(N). All non-terminals in Figure 1 belong to the same class; the grammar of Figure 2 has two recursion classes: {E, E1} and {F, F1}.

In a syntax tree, the leftmost path emanating from any node is a chain of nodes connected by first→. Suppose N_1 and N_2 belonging to the same recursion class C appear on the same leftmost path. Any non-terminal N between them must also belong to C, which follows from the fact that N_1 First→ N First→ N_2 First→ N_1. It means that members of C appearing on the same leftmost path must form an uninterrupted sequence.
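The relations just defined are easy to compute. A sketch in Python for the grammar of Figure 1 (the dictionary encoding of the rules is our own convention, not from the paper):

```python
# first->, its transitive closure First->, the recursive non-terminals,
# and the recursion classes, computed for the grammar of Figure 1.

rules = {
    "Z":  ("seq", ["x", "A", "y"]),
    "A":  ("alt", ["A1", "a"]),
    "A1": ("seq", ["B", "a"]),
    "B":  ("alt", ["B1", "B2", "b"]),
    "B1": ("seq", ["A", "b"]),
    "B2": ("seq", ["B", "b"]),
}

# e1 first-> e2: the procedure for e1 may call that for e2 on the same input.
first = {n: ({parts[0]} if kind == "seq" else set(parts))
         for n, (kind, parts) in rules.items()}

# First-> is the transitive closure of first->.
First = {n: set(s) for n, s in first.items()}
changed = True
while changed:
    changed = False
    for n in rules:
        for m in list(First[n]):
            if m in rules and not First[m] <= First[n]:
                First[n] |= First[m]
                changed = True

# N is recursive iff N First-> N.
recursive = {n for n in rules if n in First[n]}

# Recursion classes: N1 ~ N2 iff N1 First-> N2 First-> N1.
classes = {frozenset(m for m in recursive
                     if m in First[n] and n in First[m])
           for n in recursive}

print(sorted(recursive))             # every non-terminal except Z
print([sorted(c) for c in classes])  # a single recursion class
```

As in the text, all non-terminals except Z turn out to be recursive and form one recursion class.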
Such an uninterrupted sequence is called a recursion path of class C. The only recursion path in Figure 1 is the whole leftmost path from the first A, without the final a. The syntax tree in Figure 2 has two recursion paths, one starting with E and another with F.

Let R ← e_1 ... e_n be on a recursion path. The next item on the leftmost path is e_1, and it must belong to C(R) to ensure R First→ R. It follows that the last item on a recursion path must be R ← e_1 | ... | e_n where at least one of e_i is not a member of C(R). Such R is called an exit of C(R), and its alternatives outside C(R) are the seeds of C(R). In Figure 1, both A and B are exits, and the seeds are a and b. In Figure 2, E and F are exits of their respective classes, and the corresponding seeds are F and a.

A recursive N that appears in the expression for a non-terminal outside C(N), or is the start symbol, is an entry of its recursion class. It is the only non-terminal that can be first in a recursion path. The recursion class of Figure 1 has entry A, and the recursion classes of Figure 2 have E and F as their respective entries. To simplify the presentation, we assume that each class has only one entry.

3. Recursive ascent

Recursive descent constructs the syntax tree implicitly, as the structure of its procedure calls. We assume that to serve any purpose, this tree has to be somehow registered. Thus, we assume that parsing procedures include "semantic actions" that actually build a data structure representing the tree. We suggest that the parsing procedure for entry E builds its syntax tree in a special way, illustrated below by parsing the string "xabay" from Figure 1.

The parsing starts with the procedure for non-recursive Z that, in the usual way, calls the procedures for "x", A, and "y". After consuming "x", Z applies A to "abay". The procedure for entry A is not the usual choice between A1 and a; instead, it reconstructs the subtree A as follows:

1. The recursion path from A must end with a seed. The seeds are a and b.
Decide to try a.
2. Apply expression a to "abay". It consumes "a", leaving "bay". Use a as the start of the tree.
3. The only possible parent of seed a in the syntax tree is its exit A. Add A on top of the tree, with a as child.
4. We have a tree for A, but decide to continue.
5. A appears only in B1 ← A b, so B1 is the only possible parent of A.
6. The tree for A is already constructed. Complete B1 by applying expression b to the remaining "bay". It consumes "b", leaving "ay". Add B1 on top of the tree, with A, b as children.
7. B1 appears only in B ← B1 | B2 | b, so B is the only possible parent of B1.
8. Add B on top of the tree, with B1 as child.
9. The possible parents of B are A1 ← B a and B2 ← B b. Decide for A1.
10. The tree for B is already constructed. Complete A1 by applying expression a to "ay". It consumes "a", leaving "y". Add A1 on top of the tree, with B, a as children.
11. The only possible parent of A1 is A ← A1.
12. Add A on top of the tree, with A1 as child.
13. We have a tree for A, and decide to stop. Return the tree of the consumed "aba" to Z, which continues to consume "y" and constructs the tree for "xabay".

In general, the procedure for entry E chooses and executes one of the procedures that correspond to the different seeds. In the example, the choice is made in step 1, and the selected procedure consists of steps 2-13. The procedure for seed S starts by constructing the tree for S and follows by executing a procedure that adds the node for the containing exit X; then it proceeds to grow the resulting tree towards E. These are, respectively, steps 2 and 3-13 in our example. This may be seen as a parsing procedure that implements a new expression for E:

E ← S_1 $X_1 | ... | S_n $X_n   (1)

where S_i are all the seeds of C(E) and X_i are the exits containing them. We can see $X as a special case of a procedure $R that adds a new node R on top of the previously constructed tree. This is the case in steps 6, 8, 10, and 12 of our example.
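Steps 1-13 can be transcribed directly into mutually recursive procedures: one for the entry A, one "$" procedure per class member, and one "#" procedure per parent choice. The sketch below is our own Python transcription (not the paper's code): trees are nested tuples, and choices are resolved by simple ordered trial and error, growing the tree as far as possible before taking the stop alternative.

```python
# Hand-coded recursive ascent for the grammar of Figure 1.
# Each procedure takes the tree built so far and the input position,
# and returns (tree, new_position) or None on failure.

def parse(text):
    def eat(ch, pos):
        return pos + 1 if pos < len(text) and text[pos] == ch else None

    def entry_A(pos):                  # entry A: try seed a, then seed b
        p = eat("a", pos)
        if p is not None:
            r = dollar_A("a", p)       # exit A on top of seed a (steps 2-3)
            if r is not None:
                return r
        p = eat("b", pos)
        return dollar_B("b", p) if p is not None else None

    def dollar_A(tree, pos):           # $A: A is a choice, just add node A
        return hash_A(("A", tree), pos)

    def dollar_B(tree, pos):           # $B: likewise for B (step 8)
        return hash_B(("B", tree), pos)

    def dollar_A1(tree, pos):          # $A1: complete A1 <- B a (step 10)
        p = eat("a", pos)
        return hash_A1(("A1", tree, "a"), p) if p is not None else None

    def dollar_B1(tree, pos):          # $B1: complete B1 <- A b (step 6)
        p = eat("b", pos)
        return hash_B1(("B1", tree, "b"), p) if p is not None else None

    def dollar_B2(tree, pos):          # $B2: complete B2 <- B b
        p = eat("b", pos)
        return hash_B2(("B2", tree, "b"), p) if p is not None else None

    def hash_A(tree, pos):             # #A: grow via B1, or stop (steps 4/13)
        r = dollar_B1(tree, pos)
        return r if r is not None else (tree, pos)

    def hash_A1(tree, pos):            # #A1: only parent is A (step 11)
        return dollar_A(tree, pos)

    def hash_B(tree, pos):             # #B: parents A1 or B2 (step 9)
        r = dollar_A1(tree, pos)
        return r if r is not None else dollar_B2(tree, pos)

    def hash_B1(tree, pos):            # #B1: only parent is B (step 7)
        return dollar_B(tree, pos)

    def hash_B2(tree, pos):            # #B2: only parent is B
        return dollar_B(tree, pos)

    p = eat("x", 0)                    # Z <- x A y, as in the usual descent
    if p is None:
        return None
    r = entry_A(p)
    if r is None:
        return None
    tree, p = r
    p = eat("y", p)
    return ("Z", "x", tree, "y") if p == len(text) else None

print(parse("xabay"))
```

On "xabay" the ascent runs through exactly the steps of the example and returns the tree Z[x, A[A1[B[B1[A[a], b]], a]], y] of the original grammar.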
In each case the procedure then continues to build the tree in the subsequent steps, up to step 13.

The way of adding R on top of tree t depends on R, as illustrated in Figure 3 (Adding R; diagram omitted). If R ← e_1 e_2 ... e_n (where e_1 = X), $R builds the trees for e_2, ..., e_n and binds them with t into the tree for R. The whole procedure can be seen as a parsing procedure for a new non-terminal:

$R ← e_2 ... e_n #R   (2)

where e_2, ..., e_n are procedures for these expressions and #R continues the growing. If R ← e_1 | ... | e_n (where t = e_i for some i), the procedure just adds R on top of t and proceeds to grow that tree. This can be seen as a parsing procedure for:

$R ← #R.   (3)

Procedure #R consists of choosing a possible parent of node R and adding that parent to the tree. It can be seen as a parsing procedure for another new non-terminal:

#R ← $X_1 | ... | $X_k   (4)

where X_i are all members of C such that X_i first→ R. In our example, the choice (4) is performed in steps 5, 7, 9, and 11. If R is identical to E, #R can choose to terminate the building of the tree. Therefore, #E must have an alternative to allow that:

#E ← $X_1 | ... | $X_k | $ε   (5)

where procedure $ε returns the tree constructed for E. It was not chosen in step 4, but terminated the process in step 13.

4. The dual grammar

As shown above, the parsing procedure for an entry expression can be implemented by a set of procedures that reconstruct a portion of the syntax tree bottom-up, by "recursive ascent". These procedures can be seen as parsing procedures for new non-terminals. We can add these non-terminals to the original grammar. The result for the grammar of Figure 1 is shown in Figure 4. Here we replaced the expression for entry A by (1), and added the new non-terminals appearing in it, and in their expressions.
The other left-recursive non-terminals are no longer used, so they are omitted. The non-recursive non-terminals, here represented by Z, are left unchanged.

Figure 4: Dual grammar for the grammar of Figure 1:
Z ← x A y
A ← a $A | b $B
$A ← #A
$A1 ← a #A1
$B ← #B
$B1 ← b #B1
$B2 ← b #B2
#A ← $B1 | $ε
#A1 ← $A
#B ← $A1 | $B2
#B1 ← $B
#B2 ← $B

This is the "dual grammar" of the grammar in Figure 1. Figure 5 shows the dual grammar obtained in a similar way for the grammar of Figure 2.

Figure 5: Dual grammar for the grammar of Figure 2:
E ← F $E
F ← a $F
$E ← #E
$E1 ← + F #E1
$F ← #F
$F1 ← * a #F1
#E ← $E1 | $ε
#E1 ← $E
#F ← $F1 | $ε
#F1 ← $F

In general, the dual grammar is obtained as follows:

• For each entry E replace ℰ(E) by S_1 $X_1 | ... | S_n $X_n, where S_1, ..., S_n are all seeds of C(E) and X_i is the exit containing S_i.
• Replace each recursive R ← e_1 e_2 ... e_n by $R ← e_2 ... e_n #R.
• Replace each recursive R ← e_1 | ... | e_n by $R ← #R.
• For each R ∈ R create #R ← $X_1 | ... | $X_k if R ≠ E, or #R ← $X_1 | ... | $X_k | $ε if R = E, where X_1, ..., X_k are all members of C(R) such that X_i first→ R, and E is the entry of C.

The dual grammar is an n-tuple D = (N_D, Σ, E_D, ℰ_D, N_s). Its set N_D consists of:

• the non-recursive members of N;
• the entries of recursion classes;
• the $-non-terminals;
• the #-non-terminals.

In the following, the set of all non-recursive members of N is denoted by R̄, and the set of all entries by R_E. They appear as non-terminals in both G and D. The set R̄ ∪ R_E of these common symbols is denoted by N_c. The functions ℰ and ℰ_D assign the same expressions to members of R̄. The two important facts about the dual grammar are:

Proposition 1. The dual grammar is left-recursive only if the original grammar contains a cycle, that is, a non-terminal that derives itself. Proof is found in the Appendix.

Proposition 2. ℒ_D(N) = ℒ(N) for all N ∈ N_c. Proof is found in the Appendix.
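The construction just described can be carried out mechanically. Below is a sketch in Python for the grammar of Figure 2; the rule encoding, the "$"/"#" string prefixes, and the name "$eps" for the terminating alternative are our own conventions:

```python
# Mechanical construction of the dual grammar for the grammar of Figure 2.

rules = {
    "E":  ("alt", ["E1", "F"]),
    "E1": ("seq", ["E", "+", "F"]),
    "F":  ("alt", ["F1", "a"]),
    "F1": ("seq", ["F", "*", "a"]),
}
start = "E"

# first-> and its transitive closure First-> (Section 2).
first = {n: ({p[0]} if k == "seq" else set(p)) for n, (k, p) in rules.items()}
First = {n: set(s) for n, s in first.items()}
changed = True
while changed:
    changed = False
    for n in rules:
        for m in list(First[n]):
            if m in rules and not First[m] <= First[n]:
                First[n] |= First[m]
                changed = True

R = {n for n in rules if n in First[n]}                       # recursive
cls = {n: {m for m in R if m in First[n] and n in First[m]} for n in R}
entries = {n for n in R                                       # used outside
           if n == start or any(n in p and u not in cls[n]    # their class,
                                for u, (_, p) in rules.items())}  # or start

dual = {}
for n, (kind, parts) in rules.items():
    if n not in R:
        dual[n] = (kind, parts)            # non-recursive: copy unchanged
    elif n in entries:                     # entry: one alternative per seed
        dual[n] = ("alt", [("seq", [s, "$" + x])
                           for x in sorted(cls[n])
                           if rules[x][0] == "alt"            # x is an exit
                           for s in rules[x][1] if s not in cls[n]])
    if n in R:                             # $N and #N for every recursive N
        dual["$" + n] = ("seq", (parts[1:] if kind == "seq" else []) + ["#" + n])
        callers = ["$" + m for m in sorted(cls[n]) if n in first[m]]
        dual["#" + n] = ("alt", callers + (["$eps"] if n in entries else []))

for n in sorted(dual):
    print(n, "<-", dual[n])
```

The printed rules coincide with Figure 5; the recursive non-entries E1 and F1 are dropped, as in the text.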
The start symbol N_s is either non-recursive or an entry, so it appears in both grammars, and thus both grammars define the same language. It means that a recursive-descent parser for the dual grammar is a correct parser for the original grammar, and its "semantic actions" build the syntax tree for the original grammar.

5. Implementing the choices

The construction of the dual grammar introduces a number of choice expressions. We assume that they are treated in the same way as those originally present in the grammar. If the dual grammar has the LL(k) property, the choice can be made by looking at the next k input terminals. The dual grammar of Figure 4 is LL(1), so the decisions in the example could well be made by looking at the next terminal.

If the dual grammar is not LL(k), a possible option is trial and error with backtracking. As this may result in exponential processing time, the practical solution is limited backtracking, recently popular as the core of Parsing Expression Grammar (PEG) [5]. (As a matter of fact, the three solutions named in the Introduction are all suggested as modifications to PEG.) The problem with limited backtracking is that it may fail to accept some legitimate input strings. For some grammars, which may be called "PEG-complete", limited backtracking does accept all legitimate strings. Thus, if the dual grammar is PEG-complete, it provides a correct parser for the original grammar. There exists a sufficient condition that may be used to check whether the dual grammar is PEG-complete [6].

The checks for LL(k) and PEG-completeness are carried out on the dual grammar. It is the subject of further research how they can be replaced by checks applied to the original grammar.

6. Towards full grammar

We made here two simplifying assumptions. First, we did not include the empty string ε in the grammar. Second, we assumed a unique entry to each recursion class.

The result of adding ε is that some expressions may derive the empty string.
These expressions are referred to as nullable, and can be identified as such by analyzing the grammar. A nullable e_1 in e ← e_1 ... e_n invalidates the whole analysis in Section 2. A nullable e_2 ... e_n in recursive R ← e_1 e_2 ... e_n invalidates the proof of Proposition 1. Except for these two cases, our approach seems to work with the empty string.

The assumption of a unique entry per recursion class is not true for many grammars; for example, the class of Primary in Java has four entries. The exit alternative $ε must appear in #E for the entry E that actually started the ascent. This can be solved by having a separate version of #E and $E for each entry, multiplying the number of these non-terminals by the number of entries. A practical shortcut can exploit the fact that the ascents are nested. One can keep a stack of entries for the active ascents and modify the procedure for #R to check whether R is the entry of the current ascent.

7. Final remarks

The traditional way of eliminating left recursion is to rewrite the grammar so that left recursion is replaced by right recursion. It can be found, for example, in [7], page 177. The process is cumbersome and produces large results; most importantly, it loses the spirit of the grammar. Our rewriting is straightforward and produces the correct syntax tree for the original grammar.

The methods described in [1, 2, 3] use memoization tables and special code to handle them. The procedures are repeatedly applied to the same input. Our approach is in effect just another recursive-descent parser. Our idea of recursive ascent is in principle the same as that of [4], but the latter uses a specially coded "grower" to interpret data structures derived from the grammar.

As indicated in Section 6, our method does not handle some cases of nullable expressions. Among them is the known case of "hidden" left recursion: A ← e_1 e_2 ... e_n becomes left-recursive when e_1 derives ε.
Another unsolved problem is E ← E + E | a, which results in a right-recursive parse for E.

References

[1] S. Medeiros, F. Mascarenhas, R. Ierusalimschy, Left recursion in Parsing Expression Grammars, Science of Computer Programming 96 (2014) 177–190.
[2] A. Warth, J. R. Douglass, T. D. Millstein, Packrat parsers can support left recursion, in: Proceedings of the 2008 ACM SIGPLAN Symposium on Partial Evaluation and Semantics-based Program Manipulation, PEPM 2008, San Francisco, California, USA, January 7-8, 2008, pp. 103–110.
[3] L. Tratt, Direct left-recursive parsing expression grammars, Technical Report EIS-10-01, School of Engineering and Information Sciences, Middlesex University, 2010.
[4] O. Hill, Support for Left-Recursive PEGs, 2010. https://github.com/orlandohill/peg-left-recursion.
[5] B. Ford, Parsing expression grammars: A recognition-based syntactic foundation, in: N. D. Jones, X. Leroy (Eds.), Proceedings of the 31st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2004, ACM, Venice, Italy, 2004, pp. 111–122.
[6] R. R. Redziejowski, More about converting BNF to PEG, Fundamenta Informaticae 133 (2014) 177–191.
[7] A. V. Aho, R. Sethi, J. D. Ullman, Compilers, Principles, Techniques, and Tools, Addison-Wesley, 1987.

A. Proof of Proposition 1

For e1 ∈ N_D and e2 ∈ N_D ∪ Σ, define e1 firstD→ e2 to mean that parsing procedure e1 may call parsing procedure e2 on the same input:

(a) For N ∈ R̄, firstD→ is the same as first→.
(b) For N ∈ R_E, N firstD→ S for each seed S of C(N).
(c) For $R ← e_2 ... e_n #R, $R firstD→ e_2.
(d) For $R ← #R, $R firstD→ #R.
(e) #R firstD→ $X_i for each X_i ∈ C(R) such that X_i first→ R.

Grammar D is left-recursive if N FirstD→ N for some N ∈ N_D, where FirstD→ is the transitive closure of firstD→. Suppose D is left-recursive, that is, there exist N_1, N_2, ..., N_n ∈ N_D such that N_1 FirstD→ N_2 FirstD→ ... FirstD→ N_n and N_n = N_1.
We start by showing that none of them can be in N_c. Assume, by contradiction, that N_1 ∈ N_c. We show that in this case N_2 ∈ N_c and N_1 First→ N_2.

(Case 1) N_1 ∈ R̄. Because ℰ_D(N_1) = ℰ(N_1), N_2 is in N, and thus in N_c. We have N_1 first→ N_2.

(Case 2) N_1 ∈ R_E. According to (b), N_2 is a seed of C(N_1), which is either non-recursive or an entry, and thus is in N_c. For a seed N_2 of C(N_1) holds N_1 First→ N_2.

(Conclusion) The above can be repeated with N_2, ..., N_{n−1} to show that if any of N_i is in N_c, then for all i, 1 ≤ i ≤ n, N_i ∈ N_c and N_i First→ N_i. Thus, all N_i belong to R and are all in the same recursion class. As none of N_i belongs to R̄, they must all be in R_E. Then, in particular, N_1 ∈ R_E. But, according to (b), N_2 is a seed of C(N_1), which cannot be a member of C(N_1). Thus, N_1 ∉ N_c.

It follows that each N_i is $R_i or #R_i for some R_i ∈ R. If R_i ← e_1 e_2 ... e_n, we have from (c) $R_i firstD→ e_2, and e_2 ∈ N_c. That means all R_i are choice expressions. Thus, the sequence of N_i is a repetition of $R_i, #R_{i+1}, $R_{i+2} where, according to (d), R_{i+1} = R_i, and according to (e), R_{i+2} ← ... | R_{i+1} | .... That means R_{i+2} derives R_i, and in n/2 steps derives itself. □

B. Proof of Proposition 2

The proof is in terms of syntax trees. We write N◁x for a syntax tree of x with root N. We write N◁[e_1◁x_1 + ⋯ + e_n◁x_n]x to represent a syntax tree with root N and children e_1◁x_1, ..., e_n◁x_n, where x_1 ... x_n = x. To distinguish between the trees according to grammar G (G-trees) and those according to grammar D (D-trees) we use the symbols ◁G respectively ◁D. Assuming that a terminal derives itself, we show that every string x derived from N ∈ N_c ∪ Σ according to G can also be derived from N according to D, and vice versa. The proof is by induction on the height of syntax trees.

(Induction base) Syntax tree of height 0. This is the syntax tree of a terminal.
The terminals and their syntax trees are identical in both grammars.

(Induction step) Assume that the stated property holds for all strings w having a syntax tree of height h ≥ 0 or less. Lemmas 1 and 2 below show that it then holds for syntax trees of height h + 1.

Lemma 1. Assume that for each N◁G w of height h ≥ 0 with N ∈ N_c ∪ Σ exists N◁D w. Then the same holds for each N◁G w of height h + 1.

Proof. Take any N◁G w of height h + 1 where N ∈ N_c.

(Case 1) N ∈ R̄. It means N ← e_1 ... e_n or N ← e_1 | ... | e_n. We have N◁G[e_1◁G w_1 + ⋯ + e_n◁G w_n]w respectively N◁G[e_i◁G w]w. Each of the subtrees e_j◁G w_j has height h or less and each e_j is in N_c ∪ Σ. By assumption, exists e_j◁D w_j for each j. The required D-tree is N◁D[e_1◁D w_1 + ⋯ + e_n◁D w_n]w respectively N◁D[e_i◁D w]w.

(Case 2) N ∈ R_E. The tree N◁G w has the leftmost path R_k first→ ... first→ R_1 first→ R_0, where R_k = N, R_i ∈ C(N) for k ≥ i > 0, and R_0 is a seed of C(N). Define x_i to be the string derived by G from R_i, for 0 ≤ i ≤ k. We have thus w = x_k. Define w_0 to be the string derived by G from R_0, so the subtree for R_0 is R_0◁G w_0.

Let 1 ≤ i ≤ k. If R_i ← e_1 e_2 ... e_n, the subtree for R_i is R_i◁G[R_{i−1}◁G x_{i−1} + e_2◁G v_2 + ⋯ + e_n◁G v_n]x_i. Define w_i = v_2 ... v_n, so x_i = x_{i−1} w_i. If R_i ← e_1 | ... | e_n, the subtree for R_i is R_i◁G[R_{i−1}◁G x_{i−1}]x_i. Define w_i = ε, so we have again x_i = x_{i−1} w_i. One can easily see that w = x_k = w_0 ... w_k.

Define y_i to be the string derived by D from $R_i, and z_i to be the string derived by D from #R_i, for 0 ≤ i ≤ k. For entry R_k exists the tree #R_k◁D ε. If $R_k ← #R_k, exists the tree $R_k◁D[#R_k◁D ε]ε = w_k. If $R_k ← e_2 ... e_n #R_k, exists by assumption a D-tree for each of e_j, deriving, respectively, v_j. Thus, exists the tree $R_k◁D[e_2◁D v_2 + ⋯ + e_n◁D v_n + #R_k◁D ε]v_2 ...
v_n ε = w_k. Define y_k = w_k.

Suppose that exists the tree $R_i◁D y_i. Then exists the tree #R_{i−1}◁D[$R_i◁D y_i]y_i. By a construction similar to the above, we find the D-tree for $R_{i−1} deriving y_{i−1} = w_{i−1} y_i. At the end, we find the D-tree for $R_1 deriving y_1 = w_1 y_2. By assumption there exists a D-tree for the seed R_0 deriving w_0. For entry R_k we have the alternative R_k ← R_0 $R_1, which gives the D-tree R_k◁D[R_0◁D w_0 + $R_1◁D y_1]w_0 y_1. One can easily see that y_1 = w_1 ... w_k, so we have a D-tree for N = R_k deriving w = w_0 ... w_k. □

Lemma 2. Assume that for each N◁D w of height h ≥ 0 with N ∈ N_c ∪ Σ exists N◁G w. Then the same holds for each N◁D w of height h + 1.

Proof. Take any N◁D w of height h + 1 where N ∈ N_c.

(Case 1) N ∈ R̄. It means N ← e_1 ... e_n or N ← e_1 | ... | e_n. We have N◁D[e_1◁D w_1 + ⋯ + e_n◁D w_n]w respectively N◁D[e_i◁D w]w. Each of the subtrees e_j◁D w_j has height h or less and each e_j is in N_c ∪ Σ. By assumption, exists e_j◁G w_j for each j. The required G-tree is N◁G[e_1◁G w_1 + ⋯ + e_n◁G w_n]w respectively N◁G[e_i◁G w]w.

(Case 2) N ∈ R_E. The D-tree of N has root N, a node S for the seed of C(N), and nodes $R_i, #R_i for 1 ≤ i ≤ k. Denote R_k = N and R_0 = S. Denote by y_i the string derived by $R_i and by z_i that derived by #R_i. We have z_i = y_{i+1} for i < k and z_k = ε. Denote by w_0 the string derived by R_0 and by w the one derived by N = R_k. The D-tree for R_k is R_k◁D[R_0◁D w_0 + $R_1◁D y_1]w_0 y_1, where w_0 y_1 = w.

If $R_i ← e_2 ... e_n #R_i, we have y_i = v_2 ... v_n z_i, where v_j is the string derived by e_j. Defining w_i = v_2 ... v_n, we have y_i = w_i z_i. If $R_i ← #R_i, we have y_i = z_i. Defining w_i = ε, we have again y_i = w_i z_i. Since z_i = y_{i+1} for i < k and z_k = ε, we have y_i = w_i y_{i+1} for i < k and y_k = w_k. One can easily see that w = w_0 ... w_k. By assumption exists a G-tree R_0◁G w_0.
Suppose we have the G-tree for R_{i−1}, where 1 ≤ i ≤ k, that derives x_{i−1}. Suppose R_i ← R_{i−1} e_2 ... e_n. By assumption exist G-trees that derive v_2, ..., v_n from e_2, ..., e_n, so exists a G-tree for R_i deriving x_i = x_{i−1} v_2 ... v_n = x_{i−1} w_i. Suppose R_i ← ... | R_{i−1} | .... Then exists a G-tree deriving x_i = x_{i−1} ε = x_{i−1} w_i. One can easily see that the string x_k derived by R_k is w_0 ... w_k = w. □