<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Understanding Metalanguage Integration by Renarrating a Technical Space Megamodel</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vadim Zaytsev</string-name>
          <email>vadim@grammarware.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universiteit van Amsterdam</institution>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
      </contrib-group>
      <abstract>
<p>We propose to explain subtly different technologies of the same domain by repeatedly renarrating one baseline megamodel of their technical space; the paper presents such a megamodel and its different renarrations.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Megamodels are used for modelling complex systems involving many artefacts,
each of which is also in turn a model or a transformation [
        <xref ref-type="bibr" rid="ref3 ref4">3,4</xref>
        ]. For instance, they
can help represent an entire technological space or a technical space in order to
expose its components and to explain them to a previously unaware audience (such
as students) [
        <xref ref-type="bibr" rid="ref10 ref6">6,10</xref>
        ]. The main focus of megamodelling is usually on externally
observable (meta)languages: communication protocols, data interchange formats,
application programming interfaces, algebraic data types, public library
interfaces, serialisation formats, etc. Yet there are a lot of (meta)languages used behind
the scenes for the internal representation of data structures — and we all know very
well how much of an impact a different data structure can have on the performance
of an algorithmically nontrivial application. As it turns out, megamodelling can
be very helpful here as well.
      </p>
      <p>
        Megamodels (also called linguistic architecture models [
        <xref ref-type="bibr" rid="ref10 ref6">6,10</xref>
        ],
macromodels [15], technology models, etc) come in a great variety of forms and approaches
and are theoretically useful for solving many problems of different
stakeholders. However, one of the main showstoppers is their overwhelming complexity:
not only does a typical megamodel require considerable investment in deep domain
analysis, exploratory experimentation, modelling and metamodelling, but the
result is also a towering monolith that easily intimidates any potential user.
At the same time, simplification is possible yet often undesirable, for the devil
lurks in the details. One of the existing solutions is to invest in the packaging of the
megamodel as well as in its development. We can slice the megamodel into
digestible parts and navigate stakeholders through them, possibly along various
itineraries depending on their priorities — this process is referred to as megamodel
renarration [18].
      </p>
      <p>
        A renarration of a megamodel is a story that traverses the elements of this
megamodel in order to guide the users through it and to gradually introduce
them to the model elements and thus to domain concepts. Formally, a
renarration relies on operators for addition/removal, restriction/generalisation, zoom
in/zoom out, instantiation/parametrisation, connection/disconnection and can
make use of backtracking [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In prior work we have shown renarrations as
annotated megamodel transformations, but have not used them in multiple scenarios
based on one original model.
      </p>
      <p>The approach we propose in this paper involves investing in a global
megamodel of an entire technical space, and then using renarrations of it to
demonstrate each existing technology. Thus, the contribution of the paper is mainly
the focus on using one baseline white box megamodel for establishing a common
ground for explaining various subtly different technologies of the same domain
by renarrating it repeatedly.</p>
      <p>
        Specifically in the context of the GEMOC initiative, megamodelling addresses
the second focus (integration of heterogeneous model elements), while
renarration treats the first issue (catering to various stakeholder concerns). So far this
material has been used (in a more volatile form) in teaching courses on
software language engineering [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], software evolution, software construction and in
the supervision of Master students.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Parsing with many faces</title>
      <p>For a demonstration of the proposed approach let us consider a megamodel for
parsing in the broad sense that we presented in earlier work [21]. The model
includes twelve kinds of artefacts commonly found in software language engineering
(as well as commonly encountered mappings among them, see Figure 1):
Str — a string, a file, a byte stream.</p>
      <p>Tok — a finite sequence of strings (called tokens) which, when concatenated,
yields Str. Includes spaces, line breaks, comments, etc — collectively, layout.
TTk — a finite sequence of typed tokens, possibly with layout removed, some
classified as numbers, strings, etc.</p>
      <p>Lex — a lexical source model [19] that adds grouping to typing; in fact a
possibly incomplete tree connecting most tokens together in one structure.
For — a forest of parse trees, a parse graph or an ambiguous parse tree
with sharing; a tree-like structure that models Str according to a syntactic
definition.</p>
      <p>Ptr — an unambiguous parse tree where the leaves can be concatenated to
form Str.</p>
      <p>Cst — a parse tree with concrete syntax information. Structurally similar to
Ptr, but abstracted from layout and other minor details. Comments could
still be a part of the Cst model, depending on the use case.</p>
      <p>Ast — a tree which contains only abstract syntax information.</p>
      <p>Pic — a picture, which can be an ad hoc model, a “natural model” or a
rendering of a formal model.</p>
      <p>[Figure 1: the twelve artefact kinds (Str, Tok, TTk, Lex, For, Ptr, Cst, Ast, Pic, Dra, Gra, Dia) and the mappings among them.]</p>
      <p>Dra — a graphical representation of a model (not necessarily a tree), a
drawing in the sense of GraphML or SVG, or a metamodel-independent
but metametamodel-specific syntax like OMG HUTN.</p>
      <p>Gra — an entity-relationship graph, a categorical diagram or any other
primitive “boxes and arrows” level model.</p>
      <p>Dia — a diagram, a graphical model in the sense of EMF or UML, a model
with an explicit advanced metamodel.</p>
      <p>
        The megamodel from Figure 1 provides a unique uniform view on parsing,
unparsing, formatting, pretty-printing, disambiguation, visualisation and related
activities — it is a big step from heterogeneous discordant papers originating
from relevant technical spaces toward general understanding of the field. Yet,
as we have claimed before [
        <xref ref-type="bibr" rid="ref11">18,11</xref>
], a monolithic megamodel can play the role of a
knowledge container, but cannot be used directly as the deployed artefact. (As a
side remark, this corresponds to the claim by Bézivin et al that a megamodel as
a model of models should not be used as a reference model [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]). Hence, we need
a renarration of a megamodel to successfully deliver the knowledge behind it. A
renarration can happen naturally (e.g., as a lecture for students) or be formally
inferred with megamodel transformation operators for addition, connection,
instantiation, etc [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>[Figure 2. Renarrated fragments of the megamodel: (a) lex and yacc; (b) ANTLR; (c) Rascal; (d) Iterative lexical analysis.]</p>
      <p>In this paper, we use English for the narrative, and the models themselves are
available at ReMoDD: http://www.cs.colostate.edu/remodd/v1/content/
renarrating-metalanguage-integration. In the following sections, we
demonstrate several renarrations of the megamodel from Figure 1.</p>
      <sec id="sec-2-1">
        <title>Parsing in a narrow sense: lex + yacc</title>
        <p>
          One of the textbook approaches to parsing is using two tools to obtain a parse
tree from the input string: one for lexical analysis and one for syntactic
analysis. In many classic compiler construction courses lexical analysis is done with
lex [
          <xref ref-type="bibr" rid="ref12">12</xref>
] or one of its successors. The tokens that are obtained by lexical analysis
are in fact typed, but the type information is not necessarily used for anything,
so we can model the result of the lexical analysis with Tok. The next step is
handled by a compiler compiler like yacc [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] or its more modern counterparts (but
not too innovative — we want to stick to the classic DragonBook-like view [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]).
This syntax analysis tool consumes Tok and produces a parse tree — Ptr. This
can be seen in the rather simple Figure 2(a).
        </p>
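        <p>The two-phase pipeline of Figure 2(a) can be mimicked in miniature. The sketch below is a hand-written stand-in, not actual lex/yacc output: a lexer produces Tok and a recursive-descent parser consumes it to build a Ptr; the grammar (right-nested additions) is our own toy assumption.</p>
        <preformat>
```python
import re

# A miniature stand-in for the lex+yacc pipeline (illustrative only):
# phase 1 produces Tok, phase 2 consumes Tok and produces a parse tree (Ptr).

def lex(src):
    """Lexical analysis: Str to Tok (a flat list of token strings)."""
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    """Syntactic analysis: Tok to Ptr, for right-nested additions."""
    head = tokens[0]
    if len(tokens) == 1:
        return ("num", head)
    # tokens[1] is expected to be "+"; the remainder is parsed recursively
    return ("add", ("num", head), parse(tokens[2:]))

ptr = parse(lex("1+2+3"))
assert ptr == ("add", ("num", "1"), ("add", ("num", "2"), ("num", "3")))
```
        </preformat>
        <p>Note that, as in the classic setting, the parser never sees Str itself: it only consumes the token stream.</p>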
      </sec>
      <sec id="sec-2-2">
        <title>Advanced parsing technology: ANTLR</title>
        <p>
          Consider ANTLR [14], a state of the art compiler compiler that can be used for
the same purpose as lex+yacc, but incorporates the results of several decades of
research on parsing, compiler construction and interactive programming
environments. Both a lexer and a parser are generated from the uniform syntactic
definition (grammar). Lexical nonterminals, usually written in CAPSLOCK, define
a grammar used for lexical analysis. Most of them are preterminals — their
definitions contain only terminals, combined sequentially, with disjunction, Kleene
closure and other combinators typical for regular expressions [
          <xref ref-type="bibr" rid="ref8">8</xref>
]. As shown in
Figure 2(b), the output of the lexer is TTk, a stream of strongly typed tokens
— each token has to either belong to one of the lexical categories (be parsed
as a lexical nonterminal) or match one of the terminal symbols used in the rest
of the grammar — they are turned into preterminals automatically by ANTLR.
The untyped version of the same representation (Tok) is not available directly:
if needed, one could possibly either disregard the typing information (e.g., by
using code duplicates inside semantic actions) or plug into the internals of the
generated lexer.
        </p>
        <p>A typed token stream is processed by a parser which ANTLR generates from
the input grammar. The result is Cst, a parse tree that abstracts from some
details like layout and comments. It is important to note that ANTLR generates
the definition of the Cst and provides means to traverse it. However, if one still
desires to use an abstract syntax tree, both the Ast itself and the mapping from Cst
to Ast need to be programmed explicitly in the base language of ANTLR (Java,
C++, C#, Python, etc). The mapping can be scattered among the nonterminal
definitions directly in the grammar (as semantic actions), or it can be written
as a separate program that traverses the ANTLR Cst with the ANTLR visitor
and constructs a specific Ast. The class structure of the Ast itself always needs
to be defined and processed independently from ANTLR.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Rascal metaprogramming language</title>
        <p>
          Rascal [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] is another state of the art piece of grammarware — however, an
important difference from ANTLR is that Rascal is advertised as a “one-stop-shop”
for software analysis, transformation and visualisation. Let us try to understand
this difference from Figure 2(c).
        </p>
        <p>Rascal uses generalised parsing (more specifically, GLL), which yields a parse
forest instead of a parse tree if the grammar is ambiguous. Such parse forests
(For) are represented internally with the same structure — a term representation
that is allowed to explicitly contain ambiguity nodes. Thus, in order to decide if a
given tree is For or Ptr, we need to perform a deep match on an amb(_,_) pattern
(since pattern matching is one of the basic constructions in Rascal, this operation
is trivial to express, even though it might become a performance bottleneck).</p>
        <p>By Rascal design, there is no observable distinction between Ptr and Cst. All
trees are stored internally as Ptr, but all pattern matching behaves as if both
the pattern and the term are Cst (with the pattern allowed to be incomplete). Each
unambiguous tree conforms to the grammar (a syntax specification) that was
used to parse it. A grammar is defined in Rascal within the same module or
imported as a separate one. Relying on such grammatical structure can simplify
pattern matching immensely: instead of checking for a term which is an
application of a particular production rule with certain arguments, we write the same
intent down with a term on the left hand side, typed to a particular nonterminal
and thus fully conforming to its structure (modulo intended gaps to be skipped
during unification).</p>
        <p>A Cst can be mapped to Ast explicitly by writing a pattern-matching visitor,
which is done in some cases that require sophisticated computations as a part
of the mapping. However, an easier way is to use the implode() library function
that has a set of stable heuristic rules for finding a bidirectional correspondence
between a given syntax definition and a given algebraic data type. The ADT itself
(the structure of Ast) must still be programmed manually, which is traditionally
not considered to be a burden, since one wants to have full control over the
way abstract syntax is defined. (When this is not the case, it can be inferred
from the grammar by grammar mutations [20] of GrammarLab, a Rascal library
for manipulating grammars in a broad sense, available at http://grammarware.github.io/lab.)
implode() is not shipped with a reverse function, so any derivation from Ast
to Cst/Ptr, if needed, must be programmed manually.</p>
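        <p>An implode-like mapping can be sketched as a heuristic fold over the tree. The two rules below (collapse single-child chains, drop literal children) are a loose illustration of the idea and far simpler than the actual heuristics of Rascal's implode().</p>
        <preformat>
```python
# A loose sketch of an implode-like Cst-to-Ast mapping. The two rules are
# illustrative assumptions, not Rascal's actual implode() heuristics.

def implode(cst):
    if not isinstance(cst, tuple):
        return None                      # bare literals leave no Ast trace
    label, children = cst[0], cst[1:]
    kept = [a for a in map(implode, children) if a is not None]
    if not kept:                         # only literal children: keep text
        return (label, "".join(children))
    if len(kept) == 1:
        return kept[0]                   # collapse injection chains
    return (label, *kept)

cst = ("expr", ("term", ("num", "1")), "+", ("term", ("num", "2")))
assert implode(cst) == ("expr", ("num", "1"), ("num", "2"))
```
        </preformat>
        <p>Like the real function, this fold is not invertible: the "+" literal and the "term" injections are gone, so recovering Cst/Ptr from the result would have to be programmed separately.</p>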
        <p>High level abstract diagrams (Dia) are also modelled in Rascal by algebraic
data types managed by the (meta)programmer. A universal yet still high level
visual model (Gra) is provided in the standard Rascal library and contains
elements like boxes, grids, graphs, trees, plots. A render() function, however,
positions all these elements automatically and only outputs the final picture
(Pic) on screen or to a file, effectively skipping over Dra — for a Rascal
programmer it means having no control over the exact positioning of most elements on
canvas, except for general constraints which are a part of the metamodel of Gra.</p>
      </sec>
      <sec id="sec-2-4">
        <title>Semiparsing: building lexical models with ILA</title>
        <p>
          Semiparsing [19] is an umbrella term for techniques of imprecise
manipulation of source code (its variations are known as agile modelling, robust
parsing, lightweight processing, error repair, etc). They are inherently very
different because they usually come into existence to solve a very particular practical
problem — we have claimed recently that Boolean grammars [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] and parsing
schemata [16] can be helpful in modelling all possible variations of semiparsing.
However, as useful as these two formalisations could be for a deep understanding
of the methods, for relating and positioning them among themselves, they are not
always as effective for their implementation-driven comprehension, especially by
software engineering practitioners without a background in formal methods.
        </p>
        <p>
          Consider Figure 2(d), which demonstrates a semiparsing technique called
iterative lexical analysis [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] (a similar technique has recently emerged in a more
modern framework called TEBA for analysis of tokenised syntax trees [17]). The
technique relies mainly on patterns which are classified in levels: the higher the
level, the more unstable and the less desirable the pattern is. So, on the first
level there are strict matches for terminals such as keywords, and on the last
level there are “desperate” heuristics that are meant more to ensure that the
process produces some kind of result than to actually claim any correctness.
Hence, we only work with the left column of our megamodel: the higher we are
in the model, the more abstract and imprecise patterns are applied. There is
no direct correspondence between pattern levels and layers of the megamodel,
but for each concrete pattern we can easily find a place. For example, a pattern
that detects strings and demotes the role of tokens inside a string from possible
metasymbols (e.g., so that a curly bracket in a = "b{"; is never used for block
identification) clearly works on TTk, while a pattern that matches an identifier
followed by a bracketed comma-separated list of identifiers followed by a block
of statements and promotes it to a function definition, naturally produces Lex.
        </p>
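        <p>The leveled-pattern idea can be sketched as repeated passes over a token sequence, trying the most reliable patterns first. Everything below (the pattern shape, the level assignment) is a hypothetical illustration of iterative lexical analysis, not the original tool.</p>
        <preformat>
```python
# Hypothetical sketch of iterative lexical analysis: patterns are applied in
# order of reliability; each pass groups matched token runs into one node.
# The concrete pattern and its level are illustrative assumptions.

def match_string(tokens, i):
    """Level 1 pattern: group a quoted string so its inner tokens (for
    example a curly bracket) cannot act as metasymbols later."""
    if tokens[i] == '"':
        j = i + 1
        while tokens[j] != '"':
            j += 1
        return ("string", tokens[i:j + 1]), j + 1
    return None

def apply_pattern(tokens, pattern):
    """One pass over the sequence; unmatched tokens are kept as they are."""
    out, i = [], 0
    while i != len(tokens):
        m = pattern(tokens, i)
        if m:
            node, i = m
            out.append(node)
        else:
            out.append(tokens[i])
            i += 1
    return out

# a = "b{";  -- the bracket inside the string is safely demoted:
lexed = apply_pattern(["a", "=", '"', "b", "{", '"', ";"], match_string)
assert lexed == ["a", "=", ("string", ['"', "b", "{", '"']), ";"]
```
        </preformat>
        <p>Running further, less reliable passes over the partially grouped sequence would gradually turn it into a Lex model, exactly the left-column ascent described above.</p>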
        <p>Operations for descending from Lex to TTk to Tok to Str are not explicitly
described in the paper about iterative lexical analysis, but are certainly available
in any sensible framework: we need to flatten (unfold) all hierarchical constructs
to get down to TTk, disregard type information to get down to Tok and
concatenate all tokens to get all the way to Str.
</p>
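        <p>These three descending operations can be sketched directly. The tuple encodings of Lex, TTk and Tok below are our own assumptions, chosen only to make the three steps concrete.</p>
        <preformat>
```python
# The three descending operations: flatten Lex to TTk, untype TTk to Tok,
# and concatenate Tok back to Str. The tuple encodings are assumptions.

def flatten(lex):
    """Lex to TTk: unfold hierarchical groups into a flat typed sequence."""
    out = []
    for item in lex:
        if isinstance(item, tuple) and isinstance(item[1], list):
            out.extend(flatten(item[1]))   # a group: recurse into it
        else:
            out.append(item)               # a typed token: keep as is
    return out

def untype(ttk):
    """TTk to Tok: drop the type tags."""
    return [text for (kind, text) in ttk]

def concat(tok):
    """Tok to Str: concatenation restores the original string."""
    return "".join(tok)

lex = [("id", "a"), ("op", "="), ("string", [("quote", '"'), ("char", "b")])]
assert concat(untype(flatten(lex))) == 'a="b'
```
        </preformat>
        <p>Composing the three functions descends the whole left column of the megamodel, from Lex all the way back to Str.</p>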
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Conclusion</title>
      <p>
        Megamodels are used as an understanding aid in complex scenarios involving
various technologies, software languages, methodologies, approaches and
transformations [
        <xref ref-type="bibr" rid="ref3 ref4">3,4</xref>
        ]. Renarrations of megamodels improve their usefulness by guiding
megamodel consumers through the forest of inherently complicated artefacts
and mappings [
        <xref ref-type="bibr" rid="ref11">11,18</xref>
        ]. Megamodels, whether ad hoc (a sentence “model M
conforms to a metamodel MM” is in fact a tiny megamodel) or formal (AMMA,
MEGAF, SPEM, MCAST, MegaL, CT), perform undeniably well for teaching
purposes when introducing students to a new technology and explaining subtle
differences between two almost identical technologies. In this paper, we have
claimed that the same approach can be used for internal “languages”, the ones
that are hiding behind the scenes inside our tools. For this purpose, we propose
to have one baseline megamodel of the domain — in a formal sense, it will include
a lot of abstract entities, unbounded elements, constraints based on roles, etc —
and use its refined renarrations for each of the concrete technologies that need
to be explained and understood. We have demonstrated this approach with our
megamodel for parsing in a broad sense [21], which we have used as a baseline
model for four renarrations: classic lex+yacc parsing [
        <xref ref-type="bibr" rid="ref1 ref12 ref7">1,7,12</xref>
        ], ANTLR language
workbench [14], Rascal one-stop-shop [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and iterative semiparsing [
        <xref ref-type="bibr" rid="ref5">5,17,19</xref>
        ].
      </p>
      <p>Besides the obvious future work claims such as promises of (mega)modelling
different domains and perhaps even megamodelling relations among such
domains, some other open questions remain. For example, some megamodels
require explicit distinction between kinds of mappings they express (injective,
bijective, monomorphic, isomorphic, asymmetric bidirectional, symmetric
bidirectional, etc), and such distinctions would also have to be properly specified and
renarrated. In other cases, the modelling framework may already have a
metamodel suitable for expressing typical renarrations, and the megamodel
navigating arsenal would need to be adjusted with respect to the language it must be
expressed in (instead of the opposite situation which we always assume).</p>
      <p>Like any other modelling method that introduces unification and homogeneity,
(mega)modelling different technologies with renarrations of the same baseline
megamodel can help not only in explaining the actual state of the art, but also
in spotting singularities. Anything irregular could be a signal of a bug, a not yet
implemented feature or a comprehension mistake. Why is there a mapping from
Cst to Ast in Rascal but no universal mapping from Ast to Cst? Perhaps we
should include one! Is there a good reason for Dra not to be accessible in Rascal?
Having it explicitly as a (possibly optional) first class entity could allow us to do
things we otherwise cannot! Would it help to organise patterns for ILA/TEBA
based not on their “desperation”, but on the kind of artefacts they actually
deal with (untyped tokens, typed tokens, grouped tokens)? Exploration of the
extent of usefulness of such conclusions remains future work.</p>
      <p>14. T. Parr. The Definitive ANTLR Reference: Building Domain-Specific Languages. Pragmatic Bookshelf, first edition, May 2007.
15. R. Salay, J. Mylopoulos, and S. Easterbrook. Using Macromodels to Manage Collections of Related Models. In CAiSE, pages 141–155. Springer, 2009.
16. K. Sikkel. Parsing Schemata — a Framework for Specification and Analysis of Parsing Algorithms. Springer, 1997.
17. A. Yoshida and Y. Hachisu. A Pattern Search Method for Unpreprocessed C Programs based on Tokenized Syntax Trees. In SCAM, 2014.
18. V. Zaytsev. Renarrating Linguistic Architecture: A Case Study. In MPM 2012, pages 61–66. ACM, Nov. 2012.
19. V. Zaytsev. Formal Foundations for Semi-parsing. In CSMR-WCRE 2014 ERA, pages 313–317. IEEE, Feb. 2014.
20. V. Zaytsev. Software Language Engineering by Intentional Rewriting. EC-EASST; Software Quality and Maintainability, 65, Mar. 2014.
21. V. Zaytsev and A. H. Bagge. Parsing in a Broad Sense. In MoDELS, volume 8767 of LNCS, pages 50–67. Springer, Oct. 2014.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Aho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sethi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Ullman</surname>
          </string-name>
          . Compilers: Principles, Techniques and Tools. Addison-Wesley,
          <year>1985</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Bagge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lämmel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Zaytsev</surname>
          </string-name>
          .
          <article-title>Reflections on Courses for Software Language Engineering</article-title>
          .
          <source>In Tenth Educators Symposium (EduSymp</source>
          <year>2014</year>
          ),
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>J.</given-names>
            <surname>Bézivin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gérard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.-A.</given-names>
            <surname>Muller</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Rioux</surname>
          </string-name>
          .
          <article-title>MDA components: Challenges and Opportunities</article-title>
          .
          <source>In Proceedings of the First International Workshop on Metamodelling for MDA</source>
          , pages
          <fpage>23</fpage>
          -
          <lpage>41</lpage>
          , Nov.
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>J.</given-names>
            <surname>Bézivin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Jouault</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Valduriez</surname>
          </string-name>
          .
          <article-title>On the Need for Megamodels</article-title>
          .
          <source>OOPSLA &amp; GPCE, Workshop on best MDSD practices</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>A.</given-names>
            <surname>Cox</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Clarke</surname>
          </string-name>
          .
          <article-title>Syntactic Approximation Using Iterative Lexical Analysis</article-title>
          .
          <source>In Proceedings of the International Workshop on Program Comprehension (IWPC</source>
          <year>2003</year>
          ), pages
          <fpage>154</fpage>
          -
          <lpage>163</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>J.-M.</given-names>
            <surname>Favre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Lämmel</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Varanovich</surname>
          </string-name>
          .
          <article-title>Modeling the Linguistic Architecture of Software Products</article-title>
          . In MoDELS, pages
          <fpage>151</fpage>
          -
          <lpage>167</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Johnson</surname>
          </string-name>
          .
          <article-title>YACC-Yet Another Compiler Compiler</article-title>
          .
          <source>Computer Science Technical Report 32</source>
          , AT&amp;T Bell Laboratories,
          <year>1975</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Kleene</surname>
          </string-name>
          .
          <article-title>Representation of Events in Nerve Nets and Finite Automata</article-title>
          .
          <source>Automata Studies</source>
          , pages
          <fpage>3</fpage>
          -
          <lpage>42</lpage>
          ,
          <year>1956</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>P.</given-names>
            <surname>Klint</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>van der Storm</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Vinju</surname>
          </string-name>
          .
          <article-title>EASY Meta-programming with Rascal</article-title>
          .
          <source>In GTTSE</source>
          <year>2009</year>
          , volume
          <volume>6491</volume>
          <source>of LNCS</source>
          , pages
          <fpage>222</fpage>
          -
          <lpage>289</lpage>
          . Springer, Jan.
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>R.</given-names>
            <surname>Lämmel</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Varanovich</surname>
          </string-name>
          .
          <article-title>Interpretation of Linguistic Architecture</article-title>
          .
          <source>In ECMFA</source>
          , pages
          <fpage>67</fpage>
          -
          <lpage>82</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>R.</given-names>
            <surname>Lämmel</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Zaytsev</surname>
          </string-name>
          .
          <article-title>Language Support for Megamodel Renarration</article-title>
          .
          <source>In XM</source>
          <year>2013</year>
          , volume
          <volume>1089</volume>
          <source>of CEUR</source>
          , pages
          <fpage>36</fpage>
          -
          <lpage>45</lpage>
          . CEUR-WS.org, Oct.
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Lesk</surname>
          </string-name>
          .
          <article-title>LEX — A Lexical Analyzer Generator</article-title>
          .
          <source>Computer Science Technical Report 39</source>
          , AT&amp;T Bell Laboratories,
          <year>1975</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>A.</given-names>
            <surname>Okhotin</surname>
          </string-name>
          .
          <article-title>Conjunctive and Boolean Grammars: The True General Case of the Context-Free Grammars</article-title>
          . Computer Science Review,
          <volume>9</volume>
          :
          <fpage>27</fpage>
          -
          <lpage>59</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>