<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>SLE 2012 Doctoral Symposium at the 5th International Conference on Software Language Engineering</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="editor">
          <string-name>Ulrich W. Eisenecker</string-name>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Christian Bucholdt</string-name>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2012</year>
      </pub-date>
      <volume>2304</volume>
      <fpage>128</fpage>
      <lpage>142</lpage>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Preface</title>
      <p>http://ceur-ws.org/
CEUR Workshop Proceedings (CEUR-WS.org) is a publication service of Sun SITE Central Europe operated under the umbrella of RWTH</p>
      <p>Aachen University with the support of Tilburg University. CEUR-WS.org is a recognized ISSN publication series, ISSN 1613-0073.
Copyright © 2012 for the individual papers by the papers' authors. Copying permitted for private and
academic purposes. This volume is published and copyrighted by its editors.
This volume contains the proceedings of the Doctoral Symposium at the 5th International
Conference on Software Language Engineering, 25th of September 2012, hosted by the
Faculty of Information Science of the Technical University of Dresden, Germany. Previous
editions were held in Braga, Portugal (2011), Eindhoven, Netherlands (2010), Colorado,
USA (2009) and Toulouse, France (2008). The International Conference on Software
Language Engineering (SLE) aims to bring together the different sub-communities of the
software-language-engineering community to foster cross-fertilisation and to strengthen
research overall. Within this context the Doctoral Symposium at SLE 2012 contributes
towards these goals by providing a forum for both early and late-stage PhD students to
present their research and get detailed feedback and advice from researchers both in and
out of their particular research area.</p>
      <p>The Program Committee of the Doctoral Symposium at SLE 2012 received 10
submissions. We would like to thank all authors for submitting their papers. Each paper was
reviewed by at least three reviewers. Based on the review reports and intensive discussions
conducted electronically, the Program Committee selected 8 regular papers. We would like
to thank the Program Committee members and all reviewers for their efforts in the
selection process.</p>
      <p>In addition to contributed papers, the conference program includes a keynote. We are
grateful to Dr. Steffen Greiffenberg, University of Cottbus, Germany, for accepting our
invitation to address the symposium.</p>
      <p>We also would like to thank the members of the Steering Committee, the Organising
Committee as well as all the other people whose efforts contributed to making the
symposium a success.</p>
      <p>The support of our industrial sponsors is essential for SLE 2012. We cordially express
our gratitude to (in alphabetical order)</p>
      <p>Doctoral Symposium at SLE 2012 Organization</p>
    </sec>
    <sec id="sec-2">
      <title>Program Co-Chairs:</title>
    </sec>
    <sec id="sec-3">
      <title>Local Organization: Program Committee:</title>
      <sec id="sec-3-1">
        <title>Ulrich W. Eisenecker (University of Leipzig, Germany)</title>
        <p>Christian Bucholdt (Plauen, Germany)
Birgit Demuth (Technical University of Dresden, Germany)
Sven Karol (Technical University of Dresden, Germany)</p>
      </sec>
      <sec id="sec-3-2">
        <title>Steffen Becker (University of Paderborn, Germany), David Benavides (University of Seville, Spain)</title>
        <p>Mark van den Brand</p>
        <p>(Eindhoven University of Technology, Netherlands)
Sebastian Gunther (Vrije Universiteit Brussel, Belgium)
Michael Haupt (Oracle, Germany)
Arnaud Hubaux (PReCISE, University of Namur, Belgium)
Jaakko Jarvi (Texas A&amp;M University, USA)
Christian Kastner (Philipps University Marburg, Germany)
Jorg Liebig (University of Passau, Germany)
Roberto Lopez Herrejon</p>
        <p>(Johannes Kepler University of Linz, Austria)
Johannes Muller (University of Leipzig, Germany)
Oscar Nierstrasz (University of Bern, Switzerland)
Zoltan Porkolab (Eotvos Lorand University, Hungary)
Jaroslav Poruban (Technical University of Kosice, Slovakia)
Rick Rabiser (Johannes Kepler University of Linz, Austria)
Gunther Saake (University of Magdeburg, Germany)
Michal Valenta</p>
        <p>(Czech Technical University in Prague, Czech Republic)
Valentino Vranic</p>
        <p>(Slovak University of Technology in Bratislava, Slovakia)
Heike Wehrheim (University of Paderborn, Germany)</p>
        <sec id="sec-3-2-1">
          <title>Methodology as Theories in Business Informatics</title>
          <p>[Keynote Abstract]
Dr. Steffen Greiffenberg</p>
          <p>semture GmbH</p>
          <p>Dresden, Germany
steffen.greiffenberg@semture.de
Since its inception, business informatics has endeavored to establish itself as a
science and to develop characteristics that distinguish it from pure computer science. In this
keynote, theory requirements are outlined and argued to be necessities for a science.
Furthermore, methods for the development of business information systems are
proposed as candidate theories in business informatics.</p>
          <p>The explication of study designs within business informatics is currently rarely
practiced. As a result, problems regarding the objectivity, replicability and validity of
research findings may occur. These problems are reflected in questions such as: What
is the purpose of this model? Why does the reference model look the way it does? What
does this model aspire to, and how can that be verified?
This keynote presumes that the reason for this insufficient explication is
inadequate support for the researcher's task. Thus, the keynote's goal is to draft
a method for devising study designs in conceptual modeling research.
The hope associated with this method is that researchers will be equipped with
the skills to facilitate the explication of a study design.</p>
          <p>Bio
Dr. Steffen Greiffenberg is a visiting professor at the Technical University of
Cottbus and a managing partner of semture GmbH in Dresden, which builds
software modeling products.</p>
        </sec>
        <sec id="sec-3-2-3">
          <title>Interoperability of Software Engineering Metamodels: Lessons Learned</title>
          <p>Muhammad Atif Qureshi
School of Software, Faculty of Engineering and IT, University of Technology, Sydney,</p>
          <p>Australia
Abstract. The use of models and modelling languages is now commonplace
in software engineering. To formalize these modelling languages,
many metamodels have been proposed, both in the software engineering
literature and by standards organizations. Interoperability of these
metamodels has emerged as a key concern for their practical usage. We
have developed a framework for facilitating metamodel interoperability
based on schema matching and ontology matching techniques. In this
paper we do not discuss the techniques themselves; rather, we focus on the
lessons we have learned by applying the framework to several pairs of
metamodels in order to find similarities between them. We highlight
some areas where these techniques can be beneficial and also point out
some of their limitations in this domain.</p>
          <p>
            1 Problem Description and Motivation
Many metamodels have been proposed in different domains of software
engineering such as process [
            <xref ref-type="bibr" rid="ref1 ref2">1</xref>
            ], product [
            <xref ref-type="bibr" rid="ref3">2</xref>
            ], metrics [
            <xref ref-type="bibr" rid="ref4">3</xref>
            ] and programming [
            <xref ref-type="bibr" rid="ref5">4</xref>
            ]. Most of
these metamodels have been developed independently of each other with shared
concepts being only accidental. These metamodels are evolving continuously and
many versions of these metamodels have been introduced over the years. This
evolution has extended not only the scope but their size [
            <xref ref-type="bibr" rid="ref6">5</xref>
            ] and complexity
as well. The need to formulate a way in which these metamodels can be used
in an interoperable fashion has emerged as a key issue in the practical usage
of these metamodels. There are several benefits of such interoperability
including: reduced joint complexity, ease of understanding and use for newcomers,
portability of models across modelling tools and better communication between
researchers [
            <xref ref-type="bibr" rid="ref7">6</xref>
            ]. This overall need is also emphasized by the software engineering
community [
            <xref ref-type="bibr" rid="ref8">7</xref>
            ] and further endorsed by the rise of industry interest as well as
various conferences and workshops on the topic [
            <xref ref-type="bibr" rid="ref9">8</xref>
            ]. To have interoperability
between any pair of metamodels, similarities between the elements of metamodels
need to be identified. This is undertaken by a matching technique as yet
little utilized for metamodels although widely used in ontology engineering. Close
similarity between metamodels and ontologies [
            <xref ref-type="bibr" rid="ref8">7</xref>
            ],[
            <xref ref-type="bibr" rid="ref10">9</xref>
            ],[
            <xref ref-type="bibr" rid="ref11">10</xref>
            ] suggests that it should
be efficacious to adopt ontology matching techniques for facilitating meta-model
interoperability with a first step of linguistic matching. Indeed, ontologies are
also helpful in reducing semantic ambiguity [
            <xref ref-type="bibr" rid="ref10">9</xref>
            ], helping not only to improve the
semantics of a metamodel [
            <xref ref-type="bibr" rid="ref11">10</xref>
            ] but also providing a potential way in which these
meta-models can be bridged with each other to be interoperable. A framework
[
            <xref ref-type="bibr" rid="ref12">11</xref>
            ] for facilitating interoperability of metamodels has been developed based on
the ontology merging and schema matching techniques. The framework was
applied to several pairs of metamodels including OSM [
            <xref ref-type="bibr" rid="ref13">12</xref>
            ], BPMN [
            <xref ref-type="bibr" rid="ref14">13</xref>
            ], SPEM
[
            <xref ref-type="bibr" rid="ref1 ref2">1</xref>
            ] and some multi agent systems (MAS) metamodels. In this paper we discuss
the lessons learned by applying the framework on these metamodels. We have
highlighted the areas of metamodel interoperability that can be assisted by using
these techniques as well as discussing some of their limitations. In Section 2 we
briefly present our framework for metamodel interoperability. Section 3 presents
the lessons learned during the application of this framework to several
metamodels, followed by a conclusion and summary of likely future work (Section
4).
          </p>
          <p>2 Proposed Solution</p>
          <p>
            The framework for metamodel interoperability is depicted in Fig. 1 as a
BPMN diagram. The framework has two major activities: Linguistic Analysis
and Ontological Analysis. These are further divided into subactivities, as
represented in the diagram. While trying to make metamodels interoperable using
this framework, we assume that there exists some commonality between a pair
of metamodels. It is necessary to identify the potential common concepts
(conceptual elements) that can be shared between two metamodels. The detailed
discussion on this framework is not our focus in this paper but can be found
in [
            <xref ref-type="bibr" rid="ref12">11</xref>
            ]. The overall similarity of any pair of elements is based on the three
different types of similarities among them: syntactic, semantic and structural. In
applying the framework to a variety of metamodels, several thousand different
permutations were computed for the comparison of the metamodel elements. The
following sections elaborate our experience of using this framework and discuss
the lessons we have learned during the experiment.
          </p>
          <p>3 Lessons Learned: Limitations and Opportunities</p>
          <p>
            3.1 Syntactic Matching
Opportunities: Syntactic matching between a pair of metamodels is based on a
comparison between the names of the conceptual elements within those
metamodels. Different techniques in the literature are available that can be used
for such comparison. One such technique is known as string-edit distance, or
simply edit distance (ED) [
            <xref ref-type="bibr" rid="ref15">14</xref>
            ], which counts the number of token insertions,
deletions and substitutions needed to transform one lexical string S1 to another
S2, viewing each string as a list of tokens. For example the value of ED for two
strings Brian and Brain is 2. Various other techniques for string comparison are
used in different domains e.g. N-gram, Morphological Analysis (Stemming), and
Stop-Word Elimination. ED can be used then to calculate the syntactic
similarity (SSM) between a pair of elements [
            <xref ref-type="bibr" rid="ref16">15</xref>
            ]. Lessons Learned: These techniques
can be useful in comparing elements within the same domain, e.g. domain
ontologies, where elements with the same name have (most of the time) the
same meaning. The problem with these techniques in the context of
metamodels is that they are not effective when applied standalone. Our experience with
metamodel matching shows that considering only syntactic similarity measures,
isolated from their semantics, creates misunderstandings when the same
meaning is expressed in different terms. For example, confirmation and notification have
approximately 60
          </p>
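          <p>The edit-distance computation described above can be sketched as follows. The normalization used for the syntactic similarity measure is one common choice and an assumption; the exact SSM of the cited work may differ.</p>

```python
def edit_distance(s1: str, s2: str) -> int:
    """Classic Levenshtein distance via dynamic programming: the
    number of insertions, deletions and substitutions needed to
    turn s1 into s2."""
    m, n = len(s1), len(s2)
    prev = list(range(n + 1))  # distances for the empty prefix of s1
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def syntactic_similarity(s1: str, s2: str) -> float:
    """One common normalization of ED into a [0, 1] score
    (an assumption; the SSM in [15] may normalize differently)."""
    if not s1 and not s2:
        return 1.0
    return 1.0 - edit_distance(s1, s2) / max(len(s1), len(s2))

print(edit_distance("Brian", "Brain"))                 # 2, as in the text
print(round(syntactic_similarity("Brian", "Brain"), 2))  # 0.6
```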
          <p>
            3.2 Matching the Semantics
Metamodels are generally treated as a model of a modelling language [
            <xref ref-type="bibr" rid="ref17">16</xref>
            ],
[
            <xref ref-type="bibr" rid="ref18">17</xref>
            ], [18], [19]. These modelling languages are (mostly) designed for specific
domains. Therefore, we believe that to compare the semantic similarity of
metamodel elements, it is important to consider both perspectives: linguistic and
ontological. The linguistic semantics involves checking the semantics of the
metamodel elements from that modelling language's perspective, e.g. their properties
(attributes), the types of those attributes and, to some extent, their behaviour. On
the other hand, ontological semantics means finding the elements that have the
same meaning but may have been presented with different names.
Opportunities: Techniques for comparing class diagrams e.g. [20],[21] can be utilized to find
the similarities between metamodel elements, especially for the metamodels that
are represented using object-oriented classes (meta-classes) e.g. OMGs family of
meta-models. Different approaches in the area of computational linguistics and
natural language processing can be used to find ontological semantic similarity
e.g. finding the synonyms of a given conceptual element of one metamodel and
looking for those synonyms in the second metamodel. Synonyms can be found
using any lexical database, e.g. a dictionary. WordNet [21] is one lexical database
that can be used for finding synonyms and word senses. WordNet is a registered
trademark of Princeton University and contains more than 118,000 word forms
and 90,000 different word senses. Lessons Learned: We have observed that
finding ontological semantic similarity is very important as there are so many such
conceptual elements in metamodels presented with different names. For
example, Person in OSM [
            <xref ref-type="bibr" rid="ref13">12</xref>
            ] can be semantically matched with the Human Performer
in BPMN [
            <xref ref-type="bibr" rid="ref14">13</xref>
            ], although both have low syntactic similarity. Besides synonyms,
hyponyms (sub-names, e.g. sugar-maple and maple) and hypernyms (super-names, e.g.
step and footstep) can also be used to find semantically relevant elements, but none
of these have so far been considered in any technique. Similarly, meronyms (part-names,
e.g. ship and fleet) and holonyms (whole-names, e.g. face and eye) can also be
useful for finding these similarities. Another open problem is how to combine both
linguistic and ontological semantic similarity for a pair of conceptual elements. Which
one of them is more important and how much weight should be assigned to each
of them is still unaddressed.
          </p>
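          <p>The synonym-based ontological matching sketched above can be illustrated as follows. The toy synonym table merely stands in for a lexical database such as WordNet; its entries and the element names are illustrative assumptions, not taken from WordNet or from the metamodels.</p>

```python
# Toy lexical database standing in for WordNet: term -> synonym set.
# All entries below are illustrative assumptions.
SYNONYMS = {
    "person":          {"human", "individual", "soul"},
    "human performer": {"person", "human", "agent"},
    "activity":        {"task", "action", "work"},
}

def normalize(term: str) -> str:
    return term.strip().lower()

def semantically_related(a: str, b: str) -> bool:
    """Two elements match ontologically if one names the other
    or their synonym sets overlap."""
    a, b = normalize(a), normalize(b)
    syn_a = SYNONYMS.get(a, set()) | {a}
    syn_b = SYNONYMS.get(b, set()) | {b}
    return bool(syn_a & syn_b)

# Person (OSM) vs. Human Performer (BPMN): low syntactic similarity,
# but overlapping synonym sets, so they are flagged as related.
print(semantically_related("Person", "Human Performer"))  # True
```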
          <p>
            3.3 Comparing the Structures
Regardless of its level of abstraction, a metamodel is treated as a (conceptual) model
of a language [22]. For a good similarity comparison between any pair of
conceptual models, not only their syntax and semantics but also their structure
should be compared. Opportunities: Different techniques have been proposed in
the literature for structural similarity of conceptual models. Some of these [22],
[
            <xref ref-type="bibr" rid="ref15">14</xref>
            ] compare the structure of business process models, whilst others [23],[24] are
for matching the structure of conceptual models based on graph theory. An
alternative to a graph matching technique is the schema matching techniques [24],
[25][26][27][28]. In this technique, the structural similarity of two conceptual
elements C1 and C2 is calculated based on their structural neighbours - ancestors,
siblings, immediateChilds, and leafs. These partial similarities are then
calculated by mean values of all the neighbouring concepts. Lessons Learned: The
techniques used to compare the structure of business process models (e.g. [22],
[
            <xref ref-type="bibr" rid="ref15">14</xref>
            ]) cannot be generalized for metamodels as business process models are
behavioural models while metamodels represent the structural aspect. Converting
the conceptual models to graphs [23], [24] and then applying graph matching
algorithms to find the structural similarity between them is not a trivial task.
To apply such a graph matching technique, we have to be very careful in the
conversion of a class diagram into a graph. True replacement of relationships among
classes (e.g. association, generalization, aggregation, composition) into
relationships among nodes of a graph (e.g. directed/undirected, weighted/unweighted)
is not straightforward. Another barrier for the application of such techniques is
that most of the metamodels in the software engineering literature are specified
using diagrams, tables and textual explanation. Having a single class diagram
for such a huge metamodel is not easy. Techniques based on the planar graph
theory like [24] are also not feasible for meta-models because of the basic
principle of planar graphs (having no cross edges). Meta-models with a rich set of
constructs (classes) like UML can easily violate this rule as it is very difficult to
convert class diagrams of these metamodels to graphs without any cross edges.
The complexity of these graph matching techniques, as also mentioned by some
authors [
            <xref ref-type="bibr" rid="ref15">14</xref>
            ], is another barrier to their application in the domain of
metamodels, hence making them difficult to apply in practice. Based on the experience of
applying these techniques to metamodels, we recommend that we don't need to
compare the leaves of any conceptual element in a metamodel. Comparing leaf
classes of a given class (conceptual element) only results in low similarity. Also,
we think that rather than comparing all the ancestors of a conceptual element,
it is better to compare only the parent classes of that element.
          </p>
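          <p>The neighbour-based structural similarity described above can be sketched as follows. The neighbour groups compared (parents and siblings only, per the lesson learned), the example neighbourhoods, and the use of difflib's ratio as the underlying name similarity are all assumptions, not the exact formulas of the cited schema matching work.</p>

```python
from difflib import SequenceMatcher
from statistics import mean

def name_sim(a: str, b: str) -> float:
    # difflib's ratio stands in for the syntactic similarity measure
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def group_sim(g1, g2) -> float:
    """Best-match average between two neighbour groups."""
    if not g1 or not g2:
        return 0.0
    return mean(max(name_sim(a, b) for b in g2) for a in g1)

def structural_similarity(c1, c2) -> float:
    """Mean of the partial similarities of the neighbour groups.
    Following the lesson learned in the text, only parents and
    siblings are compared; leaf classes are skipped."""
    return mean([group_sim(c1["parents"], c2["parents"]),
                 group_sim(c1["siblings"], c2["siblings"])])

# Hypothetical neighbourhoods for Activity in two metamodels.
spem_activity = {"parents": ["WorkDefinition"], "siblings": ["TaskDefinition"]}
bpmn_activity = {"parents": ["FlowNode"], "siblings": ["Task", "SubProcess"]}
score = structural_similarity(spem_activity, bpmn_activity)
assert 0.0 <= score <= 1.0  # score lies in [0, 1]
```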
          <p>
            3.4 Automation
Considering the size and complexity of metamodels [
            <xref ref-type="bibr" rid="ref6">5</xref>
            ], it is highly desirable to
have tool support for matching the similarity of metamodels. However, our
experience with the matching of metamodels shows that, beyond partial tool support,
completely automated metamodel matching is not possible. Opportunities:
Automation in syntactic matching of metamodels elements can be achieved by
implementing ED (Edit Distance) and SSM (Syntactic Similarity Measure)
algorithms using available online calculators for ED and APIs. The ontological
semantics of metamodel elements can be matched automatically using lexical
databases like WordNet, MS Office Thesaurus and other APIs available. Lessons
Learned: Complete automation for metamodel similarity matching, especially for
structural similarity, requires well formed formal definitions of metamodels that
can be used as an input for any automated tool. Unfortunately, apart from XML
definitions for some of the metamodels (OMG metamodels with XMI definitions),
metamodels lack a formal specification and are mostly specified using a
combination of textual descriptions, tables and class diagrams. Another important
barrier to complete automation is that the coefficients in the equations we used
do not have any fixed values and must be assigned values by the domain
expert at the time of the matching. Also, the ontological semantic similarity
analysis requires the expert's intellectual input to decide whether two conceptual
elements are equal or not.
          </p>
          <p>
            3.5 Refactoring
Lessons Learned: Most of the metamodels have two orthogonal forms of
conceptual elements: linguistic and ontological (as also highlighted by [
            <xref ref-type="bibr" rid="ref18">17</xref>
            ]). The
former represent the language definition while the latter describe what concepts
exist in a certain domain and what properties they have. These two types of
elements are mingled with each other in most of the metamodels and there is no
explicit boundary between them. An important consideration regarding
metamodel matching is to separate these two types of elements; we call it refactoring.
          </p>
          <p>
            Metamodels need to be first refactored before matching can occur. This
refactoring is required to remove the conceptual elements in metamodels that are not
related to the domain of interest. Rather, most of these elements are linguistic
and are present in order to maintain (glue) the structure of metamodels. For
example, Resource Parameter Binding and Parallel Gateway in BPMN [
            <xref ref-type="bibr" rid="ref14">13</xref>
            ] are
the concepts that are related to the language definition of BPMN and are not
worth matching with any other metamodel of the same domain, since every
metamodel has its own language-definition elements. Instead, it is better to match
the conceptual elements that are related to the domain of interest e.g. matching
Activity in BPMN [
            <xref ref-type="bibr" rid="ref14">13</xref>
            ] with Activity in SPEM [
            <xref ref-type="bibr" rid="ref1 ref2">1</xref>
            ], which are more related to
the common domain of interest: Workflows and Processes.
          </p>
          <p>
            3.6 Ontology-Oriented Metamodels
Lessons Learned: Our experience of matching metamodels showed that there
is high heterogeneity between the ontological elements of metamodels.
We have observed that a major reason for this heterogeneity is the
lack of a common ontology or taxonomy. Much better results in interoperability
of metamodels can be achieved if metamodels share some common ontology or
taxonomy of the domain of interest; as also highlighted by [
            <xref ref-type="bibr" rid="ref9">8</xref>
            ]. The use of a
common ontology for designing/redesigning metamodels can result in better
interoperability. An example is the use of UFO (the Unified Foundational Ontology)
to redesign UML [29]. Metamodels based on a common ontology will reduce
the differences of similarity matchings, especially in syntactic and ontological
semantics matching.
          </p>
          <p>4 Conclusion
In this paper we have discussed some of the limitations and opportunities in
the field of metamodel interoperability. These recommendations are based on
the application of a framework that we have developed and applied on several
metamodels to find their similarities. We conclude that, for better
similarity findings, not only the syntax but also the semantics and structure
of metamodel elements should be matched. Metamodels need to be refactored
to separate out the ontological elements before matching, for more pragmatic
results. To avoid the problems of syntactic and semantic ambiguities between
elements, we recommend that metamodels should be based upon (or at least
utilize) some common domain ontology. We have also shown that complete
automation of matching metamodel elements is not possible and requires
substantial human intervention.
27. Filipe, J., Cordeiro, J., Sousa, J., Lopes, D., Claro, D.B., Abdelouahab, Z. In:
A Step Forward in Semi-automatic Metamodel Matching: Algorithms and Tool.
Volume 24 of Lecture Notes in Business Information Processing. Springer Berlin
Heidelberg (2009) 137–148
28. Lopes, D., Hammoudi, S., de Souza, J., Bontempo, A.: Metamodel matching:</p>
          <p>Experiments and comparison (Oct. 2006 2006)
29. Guizzardi, G., Wagner, G. In: Using the Unified Foundational Ontology (UFO) as
a Foundation for General Conceptual Modeling Languages. Springer-Verlag (2010)</p>
        </sec>
        <sec id="sec-3-2-5">
          <title>Aspect-Oriented Language Mechanisms for Component Binding</title>
          <p>Kardelen Hatun, Christoph Bockisch, and Mehmet Akşit</p>
          <p>TRESE, University of Twente
7500AE Enschede</p>
          <p>The Netherlands
http://www.utwente.nl/ewi/trese/
{hatunk,c.m.bockisch,aksit}@ewi.utwente.nl
Abstract. Domain Specific Languages (DSLs) are programming
languages customized for a problem/solution domain, which allow
development of software modules via high-level specifications. Code generation is
a common practice for making DSL programs executable: a DSL
specification is transformed into a functionally equivalent GPL (general-purpose
programming language) representation. Integrating the module generated
from a DSL specification into a base system poses a challenge, especially
when the DSL and the base system are developed
independently. In this paper we describe the problem of integrating
domain-specific modules into a system non-intrusively and promote loose coupling
between them to allow software evolution. We present our ongoing work
on aspect-oriented language mechanisms for defining object selectors and
object adapters as a solution to this problem.
</p>
          <p>1 Introduction
Complex systems are created by assembling software components of various types
and functions. Reuse is essential and components created for a system are
required to continue working after the system has evolved. Some components may
be domain-specific, meaning their structure and functionality can be defined
using the fundamental concepts of the relevant domains. A domain-specific
language (DSL) provides expressive power over a particular domain. It allows
software development with high-level specifications; if general-purpose programming
languages are used, development may take a considerable programming effort.</p>
          <p>
            The specifications written in a DSL can be processed in various ways. These
are comprehensively described in [
            <xref ref-type="bibr" rid="ref5">4</xref>
            ] and [
            <xref ref-type="bibr" rid="ref4">3</xref>
            ]. Generative programming [
            <xref ref-type="bibr" rid="ref3">2</xref>
            ] is one
of the processing options and has become highly popular with the emergence
of user-friendly language workbenches. Most language workbenches provide a
means to develop a compiler for the DSL, facilitating code generation in
general-purpose languages. (A comparison matrix for language workbenches can be found
in [
            <xref ref-type="bibr" rid="ref1 ref2">1</xref>
            ].)
          </p>
          <p>In this paper we focus on the integration of components into target systems.
“Component” is a very general concept and it can be realized in different forms,
depending on the system. We particularly focus on a subset of components,
domain-specific components, which are instances of domain-specific meta-models.
The component structure is described with a DSL and the semantics are
embedded into code generation templates, which are used to generate a component
that is tailored towards a base system’s requirements.</p>
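          <p>The code-generation step described above can be illustrated with a deliberately small sketch: a DSL specification (here just a dict; a real DSL would be parsed from concrete syntax) is expanded through a template into an equivalent GPL class. The template, the specification format and all names are hypothetical assumptions, not the paper's tooling.</p>

```python
# Minimal template-based code generation: DSL spec -> GPL source.
TEMPLATE = '''class {name}:
    """Generated from the '{name}' DSL specification."""
    def __init__(self):
{fields}
'''

def generate_component(spec: dict) -> str:
    """Render the template with the fields declared in the spec."""
    fields = "\n".join(
        f"        self.{field} = {default!r}"
        for field, default in spec["fields"].items())
    return TEMPLATE.format(name=spec["name"], fields=fields)

# A hypothetical DSL specification and its generated component.
spec = {"name": "Invoice", "fields": {"amount": 0, "currency": "EUR"}}
source = generate_component(spec)
namespace = {}
exec(source, namespace)           # compile the generated class
invoice = namespace["Invoice"]()
print(invoice.currency)           # EUR
```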
          <p>Integrating a generated component into a system poses three main challenges.
(1) When adding unforeseen functionality to a system, no explicit hooks exist for
attaching the generated component. In this case it may be necessary to modify
the generated code, the system code or both to make the connection, which
will expose the system developer to the implementation details of the generated
code. (2) The interfaces of the generated component and the target system should
be compatible to work together, which is generally not the case. Then one of
the interfaces should be adapted, possibly by modifying the system’s or the
component’s implementation or their type-system. (3) When the component or
the target system evolves, the links between them must be re-established.</p>
          <p>Current aspect-oriented languages offer mechanisms to modularly implement
solutions for the first challenge. It can be solved by defining pointcuts that are
used as hooks to a system. The second challenge is our main focus. Existing
AO-languages offer limited mechanisms for implementing adapters between
interfaces. AspectJ inter-type declarations can be used to make system classes to
implement appropriate interfaces, however this approach is type-invasive.
CaesarJ offers a more declarative approach with wrappers, but their instantiation
requires pointcut declarations or they should be explicitly instantiated in the
base system. The links mentioned in the third challenge are the adapter
implementations mentioned in the second challenge and they represent the binding
between two components. However, current AO languages do not offer a
declarative way of describing such a binding; an imperative implementation
will be less readable and less maintainable, and fragile
against software evolution.
</p>
          <p>2 Approach
In order to overcome the shortcomings of the existing approaches we intend to
design a declarative way of implementing object adapters which is used together
with a specialized pointcut for selecting objects. The object adapter pattern is
common practice for binding two components that have incompatible interfaces.
Our approach is aspect-oriented and it will provide the means to non-intrusively
define and instantiate object adapters, inside aspects. These adapters represent
links between the component and the system; their declarative design requires
a declarative way of selecting the adaptee objects.</p>
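          <p>The object adapter pattern mentioned above binds two incompatible interfaces through a third object. A minimal sketch (all class names are illustrative placeholders, not part of our language design):
```python
# A minimal sketch of the object adapter pattern: a target interface the
# system expects, an adaptee with an incompatible interface, and an adapter
# object that translates between them. All names are hypothetical.

class Target:
    """Interface the target system expects."""
    def request(self) -> str:
        raise NotImplementedError

class Adaptee:
    """Existing component with an incompatible interface."""
    def specific_request(self) -> str:
        return "adaptee result"

class ObjectAdapter(Target):
    """Holds a reference to the adaptee (object adapter, not class adapter)."""
    def __init__(self, adaptee: Adaptee):
        self._adaptee = adaptee

    def request(self) -> str:
        # Translate the call without modifying either type.
        return self._adaptee.specific_request()

client_view: Target = ObjectAdapter(Adaptee())
```
An aspect-oriented version would additionally instantiate such adapters non-intrusively, which is precisely what our declarative mechanism aims to provide.</p>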
          <p>In order to select objects to be adapted, we have designed a new pointcut
mechanism called instance pointcut which selects sets of objects based on the
execution history. An instance pointcut definition consists of three parts: an
identifier, a type which is the upper bound for all objects in the selected set, and a
specification of relevant objects. The specification utilizes pointcut expressions to
with the properties PK attached to the relations specified by A, which must hold
on the joined model k = mˆ ⋈l nˆ with l ∈ A. The notation m |= PM means that
the properties PM hold in model m, that is, m is in conformance with M.</p>
          <p>m ∈ M −parsing→ mˆ ∈ M, mˆ |= PM −τ→ nˆ ∈ N, nˆ |= PN −pretty-printing→ n ∈ N</p>
          <p>k |= PK</p>
          <p>An MDD process for the analysis of multimedia documents would refine the
process of Figure 2 by understanding M as the metamodel of a multimedia
authoring language, such as NCL, and N as the metamodel of the specification
language of a formal verification framework such as the specification language
of a model checker.</p>
          <p>
            This work proposes a transformation contract approach for the analysis of
multimedia documents. Different verification techniques will be used to analyze
multimedia documents: (i) Consistency reasoning with description logic [
            <xref ref-type="bibr" rid="ref1 ref2">1</xref>
            ] will
be used for verifying document consistency together with Object Constraint
Language (OCL) invariant execution; and (ii) Linear Temporal Logic model checking
appears to be the appropriate reasoning technique for behavioral properties of
multimedia documents.
          </p>
          <p>This work contributes a general framework, with tool support, capable
of analyzing different types of multimedia documents using different analysis
(that is, verification and validation) techniques. Our proposal uses a
language-driven approach where the authoring language semantics is represented by a
general model (called SHM - Simple Hypermedia Model) where structural and
behavioral properties are verified. In this paper we outline our approach and
discuss preliminary results achieved with a prototype of the tool.</p>
          <p>The remainder of this paper is organized as follows. Section 2 presents the
state-of-the-art on multimedia document analysis. Section 3 discusses the
proposed solution for multimedia document analysis. Section 4 discusses the current
state of the multimedia document analysis project illustrating preliminary
results. Section 5 finishes this paper presenting the next steps of this work.
</p>
          <p>
            2 State-of-the-art on multimedia document analysis
Santos et al. [
            <xref ref-type="bibr" rid="ref14">13</xref>
            ] presented an approach for the analysis of multimedia documents
by translating them into a formal specification, in that case into RT-LOTOS
processes, using general mapping rules. The modularity and hierarchy of RT-LOTOS
allows the combination of processes specifying the document presentation with
other processes modeling the available platform.
          </p>
          <p>
            The verification consists of interpreting the minimum reachability
graph built from the formal specification, in order to prove whether the action
corresponding to the presentation end can be reached from the initial state. Each
node in the graph represents a reachable state and each edge the occurrence of
an action or a temporal progression. When a possible undesired behavior is found,
the tool returns an error message to the author, who can then repair it. The tool
in [
            <xref ref-type="bibr" rid="ref14">13</xref>
            ] could analyze NCM [
            <xref ref-type="bibr" rid="ref15">14</xref>
            ] and SMIL [
            <xref ref-type="bibr" rid="ref16">15</xref>
            ] documents.
          </p>
          <p>
            Na and Furuta, in [
            <xref ref-type="bibr" rid="ref13">12</xref>
            ], presented caT (context aware Trellis), an authoring
tool based on Petri nets. caT supports the analysis of multimedia documents by
building the reachability tree of the analyzed document. The author defines limit
values for the occurrence of dead links (transitions that may never be triggered),
places with a token excess, and other options such as the maximum analysis time.
The tool investigates the existence of a terminal state, i.e., whether there is
a state where no transitions are triggered. It also investigates the boundedness
property, i.e., whether every place in the net has a bounded number of tokens, and
the safeness property, i.e., whether each place in the net holds at most one token.
The boundedness analysis is important since tokens may represent scarce system resources.
          </p>
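          <p>The analyses just described (terminal states, boundedness, safeness) can be illustrated with a naive exploration of a tiny net's reachable markings. This sketch assumes a simple dictionary encoding of markings and is not caT's actual implementation:
```python
# Naive sketch of the Petri-net analyses described above: explore the
# reachable markings of a tiny net and report terminal markings (no
# transition enabled) and boundedness (no place exceeds a token limit).
# The net encoding is an illustrative assumption, not caT's format.

def reachable_markings(initial, transitions, bound=10, limit=1000):
    """transitions: list of (consume, produce) dicts over place names."""
    seen, stack = set(), [tuple(sorted(initial.items()))]
    terminal, bounded = [], True
    while stack and len(seen) < limit:
        key = stack.pop()
        if key in seen:
            continue
        seen.add(key)
        m = dict(key)
        enabled = False
        for consume, produce in transitions:
            if all(m.get(p, 0) >= n for p, n in consume.items()):
                enabled = True
                m2 = dict(m)
                for p, n in consume.items():
                    m2[p] -= n
                for p, n in produce.items():
                    m2[p] = m2.get(p, 0) + n
                if any(v > bound for v in m2.values()):
                    bounded = False  # a place exceeded the limit
                    continue
                stack.append(tuple(sorted(m2.items())))
        if not enabled:
            terminal.append(m)  # terminal marking: nothing can fire
    return terminal, bounded

# A two-place net: t1 moves the token from p1 to p2; then nothing is enabled.
terminal, bounded = reachable_markings(
    {"p1": 1, "p2": 0}, [({"p1": 1}, {"p2": 1})])
```
</p>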
          <p>
            Oliveira et al., in [
            <xref ref-type="bibr" rid="ref8">7</xref>
            ], presented HMBS (Hypermedia Model Based on
Statecharts). An HMBS multimedia application is described by a statechart that
represents its structural hierarchy, regarding nodes and links, and its
human-consumable components. Those components are expressed as information units,
called pages and anchors. The statechart execution semantics provides the
application navigation model. A statechart state is mapped into pages, and
transitions and events represent a set of possible link activations.
          </p>
          <p>The statechart reachability tree for a specific configuration may be used to
verify whether any page is unreachable, by checking the occurrence of a state s in one
of the generated configurations, which indicates that the page is visible when the
application navigation starts in the considered initial state. In a similar manner,
it is possible to determine whether a certain group of pages may be seen simultaneously
by searching for state configurations containing the states associated with those pages.
The reachability tree also allows the detection of configurations from which no
other page may be reached or that present cyclical paths.</p>
          <p>
            Júnior et al., in [
            <xref ref-type="bibr" rid="ref11">10</xref>
            ], also present the verification of NCL documents through a
model-driven approach. The verification is also achieved by transforming an NCL
document into a Petri Net. This transformation is done in two steps. The first
step transforms the NCL document into a language called FIACRE, representing
the document as a set of components and processes. Components represent media
objects and compositions and processes represent the behavior associated to
components. The second step transforms the FIACRE representation into a Petri
Net. The verification uses a model-checking tool and temporal logic formulae to
represent the behavior the author wants to verify. Since this work is very recent,
the automation of that approach is left as future work.
          </p>
          <p>Our work contributes to the state-of-the-art with a general approach that
can be used with different multimedia authoring languages.</p>
          <p>
            3 A model-driven approach to multimedia document analysis
We propose the use of the transformation contracts approach to analyze
multimedia documents. Figure 3 refines Figure 2 and illustrates our approach
pictorially with NCL as the multimedia authoring language and Maude [
            <xref ref-type="bibr" rid="ref7">6</xref>
            ] as the
specification language for formalizing multimedia documents. Informally, Maude
modules are produced from NCL documents and the behavioral properties are
represented as LTL formulae which are verified using the Maude model checker.
          </p>
          <p>
            An important element of our approach is the so-called modeling language
for the Simple Hypermedia Model (SHM) [
            <xref ref-type="bibr" rid="ref9">8</xref>
            ]. SHM models are important for
two reasons: (i) they give formal meaning to NCM models, and (ii) they should be a
general formal representation for multimedia documents. SHM models are
essentially transition systems that have basic elements to represent multimedia
documents, such as anchors as states, events as actions and links as transitions. From
SHM models we could produce representations in different formalisms such as
Maude or SMV [
            <xref ref-type="bibr" rid="ref12">11</xref>
            ]. Behavioral properties of well-formed models that satisfy the
structural properties of a given authoring language are then checked at the
concrete level, in a formalism such as Maude or SMV.
          </p>
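          <p>Since SHM models are essentially transition systems, the behavioral check that every anchor is reachable can be sketched directly. The encoding below is our illustrative assumption, not the actual SHM metamodel:
```python
# Sketch of an SHM-like model: states are anchors, actions are events,
# and transitions are links. Checks that every anchor is reachable from
# the start anchor, one of the behavioral properties mentioned in the
# text. All anchor and event names are hypothetical.

def reachable_anchors(start, links):
    """links: dict anchor -> list of (event, target_anchor)."""
    seen, frontier = {start}, [start]
    while frontier:
        anchor = frontier.pop()
        for _event, target in links.get(anchor, []):
            if target not in seen:
                seen.add(target)
                frontier.append(target)
    return seen

links = {
    "intro": [("onEnd", "main")],
    "main":  [("onKeyRed", "extra"), ("onEnd", "credits")],
    "extra": [("onEnd", "main")],
}
all_anchors = {"intro", "main", "extra", "credits", "orphan"}
unreachable = all_anchors - reachable_anchors("intro", links)
```
</p>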
          <p>
            Let us go through each step of Figure 3. First, an NCL document is parsed
into an NCM [
            <xref ref-type="bibr" rid="ref15">14</xref>
            ] model. (NCM is the conceptual model that NCL documents
are based on and may be understood as its abstract syntax.) Thus, given an NCL
document d, if (dˆ = parse(d)) |= PNCM, that is, if the structural properties of
NCM hold in dˆ (such as non-circular nested compositions), then a model
transformation τNCM is applied to dˆ. Given that a proper SHM model sˆ is produced
by the application of the transformation contract from NCM to SHM, that is,
essentially, its states are built properly from anchors, actions properly built from
events and transitions properly built from links, a concrete representation of sˆ
may be produced in the specification language of the model checker, such as
Maude.
          </p>
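          <p>The guarded chain just described (parse, check PNCM, apply τNCM, check PSHM, pretty-print) can be sketched as follows; every function passed in is a hypothetical placeholder for the corresponding stage of the real tool, not its actual API:
```python
# Hedged sketch of the guarded transformation chain described above:
# each stage only runs if the properties of the previous stage hold.

def analyze(document, parse, holds_ncm, tau_ncm, holds_shm, pretty_print):
    d_hat = parse(document)
    if not holds_ncm(d_hat):          # d_hat |= P_NCM ?
        raise ValueError("structural properties of NCM violated")
    s_hat = tau_ncm(d_hat)            # transformation contract NCM -> SHM
    if not holds_shm(s_hat):          # s_hat |= P_SHM ?
        raise ValueError("transformation contract violated")
    return pretty_print(s_hat)        # concrete Maude (or SMV) module

# Trivial stand-ins, only to exercise the chain:
result = analyze("<ncl/>",
                 parse=lambda d: {"doc": d},
                 holds_ncm=lambda m: True,
                 tau_ncm=lambda m: {"shm": m},
                 holds_shm=lambda s: True,
                 pretty_print=lambda s: "mod DOC is ... endm")
```
</p>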
          <p>d ∈ NCL −parsing→ dˆ ∈ NCM, dˆ |= PNCM −τNCM→ sˆ ∈ SHM, sˆ |= PSHM −pretty-printing→ md ∈ Maude,
k |= PK
Fig. 3. A transformation contract approach to Maude theories from NCL documents</p>
          <p>Given md, which is well-formed and in conformance with K = NCM ⋈A
SHM, one can now verify with a model checker the temporal formulae that
represent the behavioral properties exemplified at the beginning of Section 1
(such as unreachability of document parts) and document specific properties,
defined by the document author and transformed into temporal formulae.
Counter-examples produced by the model checker, which are essentially traces that do
not satisfy the desired temporal formulae, may be presented back to the
document author as sequences of links representing SHM transitions that correspond
to transitions (or rewrites, in the case of Maude) of the faulty path encountered
by the model checker. This process is illustrated pictorially in Figure 4, where
NCL author −d ∈ NCL→ NCL Analyzer = τ(parse(d)) ⊢ modelCheck(s0, φ),
and counter-examples flow back as NCL author ←l ∈ (NCLLinks)∗− NCL Analyzer.</p>
          <p>Fig. 4. NCL Analyzer
NCL Analyzer is the tool that essentially invokes the Maude model checker,
represented in Figure 4 by the command modelCheck, which checks for the
property φ (a conjunction of the behavioral properties together with author-defined
properties) using the specification (actually, a rewrite theory) given by τ(parse(d)),
with s0 as the initial state (specified by the initial conditions of document d).</p>
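          <p>The counter-example flow of Figure 4 can be sketched as a search for an execution that gets stuck before the end state, returned as the sequence of links taken. This is only a stand-in for the real Maude modelCheck command, with hypothetical link names:
```python
# Sketch of the counter-example flow described above: a search for an
# execution that reaches a non-end terminal state, returned as the list
# of links taken, which is how counter-examples are mapped back to the
# author. Not the actual Maude model checker.

def find_stuck_trace(s0, links, end="end"):
    """Return the link labels leading to a non-end terminal state,
    or None if no such execution exists. links: state -> [(label, state)]."""
    stack, seen = [(s0, [])], set()
    while stack:
        state, trace = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        successors = links.get(state, [])
        if not successors and state != end:
            return trace                      # counter-example found
        for label, nxt in successors:
            stack.append((nxt, trace + [label]))
    return None

links = {"s0": [("lStart", "video")],
         "video": [("lChoose", "dish")],
         "dish": []}                          # "dish" never ends: a bug
trace = find_stuck_trace("s0", links)
```
</p>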
          <p>As mentioned before, SHM is intended to be a general multimedia model.
The verification of multimedia documents specified with languages different from
NCL, such as SMIL and HTML5, would require transformations from the
abstract syntax of those languages to SHM together with a proper mapping from
counter-examples of the chosen model checker to the authoring language. The
remainder of the analysis process is reused across those different languages.</p>
          <p>We have a first attempt at SHM and a prototype tool that transforms NCL
to Maude modules. Section 4 briefly discusses preliminary results.
</p>
          <p>
            4 Preliminary results
Part of the proposed solution is prototyped in a tool presented in [
            <xref ref-type="bibr" rid="ref9">8</xref>
            ], where
the first author, under the supervision of the remaining authors, proposed an
implementation of a transformer from NCL documents to Maude modules. With
that prototype it was possible to analyze structural and behavioral properties of
NCL documents. Moreover, the prototype suggests that the proposed
solution is appropriate.
          </p>
          <p>The prototype was used in several small experiments with simple documents.
It was also used with two non-trivial documents created by the Brazilian
Digital TV community. A description of the two documents (“First João” and
“Live More”) and their results are presented here.</p>
          <p>“First João” is an interactive TV application that presents an animation
inspired by a chronicle about a famous Brazilian soccer player named
Garrincha. It plays an animation, an audio track and a background image. At the moment
Garrincha dribbles the opponent, a video of kids performing the same dribble
is presented, and when his opponent falls on the ground, a photo of a kid in the
same position is presented. The user may interact with the application by pressing
the red key at the moment a soccer shoe icon appears. The animation is resized
and a video of a kid thinking about shoes starts playing.</p>
          <p>This document was distributed by the authors of the NCL language as a sample
document. As expected, the document is consistent with respect to the structural
properties (PNCM), defined taking into account the NCM grammar, and the
behavioral properties (PSHM), from the set of parameterized properties. It was
possible to verify that every anchor is reachable and has an end. Moreover, the
document as a whole ends.</p>
          <p>“Live More” is an application that presents a TV show discussing health and
welfare. Once the TV show starts playing, an interaction icon appears. If the user
presses the red key of the remote control, four different food options appear. The
user can choose a dish by pressing one of the colored keys of the remote control.
When a dish is chosen, the TV user is informed about the quality of the choice,
namely whether there are missing nutrients or nutrients in excess.</p>
          <p>This document is consistent with respect to the structural properties (PNCM).
However, the document is not consistent with respect to the behavioral
properties (PSHM). It was possible to verify that, once a dish is chosen, the anchor
representing the chosen dish and its result do not end, and consequently neither
does the document as a whole.</p>
          <p>The proposed prototype allows NCL document authors to check whether their
document exhibits one of the common undesired behaviors, in addition to validating the
document structure. From the tests performed with NCL documents it was possible
to identify refinements in our Maude specification of SHM. Such refinements
and open issues are addressed in the next section.
</p>
          <p>5 Conclusion
In this paper we presented an approach for the analysis of multimedia documents
and a prototype tool that partially implements it. This section discusses future
directions to our research project.</p>
          <p>
            We are currently working on a refinement of the specification for SHM in [
            <xref ref-type="bibr" rid="ref9">8</xref>
            ],
its Maude representation (to improve the efficiency of model checking it) and on
a formal proof for the transformation τNCM.
          </p>
          <p>An important future work is to evaluate the generality of our approach,
exploring mappings from different authoring languages to SHM, as indicated
at the end of Section 3.</p>
          <p>Our preliminary results consider predefined properties representing patterns
of behavior of multimedia documents (see Section 3). We plan to incorporate
user-defined behavioral properties, by allowing the author to define such
properties in a structured natural language (English, for example) that could be
translated to LTL formulae.</p>
          <p>We also plan to evaluate the usability of the tool resulting from this
project using human-computer interaction techniques.
2. C. Braga. A transformation contract to generate aspects from access control
policies. Journal of Software and Systems Modeling, 10(3):395–409, 2010.
3. C. Braga, R. Menezes, T. Comicio, C. Santos, and E. Landim. On the
specification verification and implementation of model transformations with transformation
contracts. In 14th Brazilian Symposium on Formal Methods, volume 7021, pages
108–123, 2011.
4. C. Braga, R. Menezes, T. Comicio, C. Santos, and E. Landim. Transformation
contracts in practice. IET Software, 6(1):16–32, 2012.
5. E. M. Clarke, O. Grumberg, and D. A. Peled. Model Checking. The MIT Press,
2000.
6. M. Clavel, S. Eker, F. Durán, P. Lincoln, N. Martí-Oliet, and J. Meseguer. All
about Maude - A High-performance Logical Framework: how to Specify, Program,
and Verify Systems in Rewriting Logic. Springer-Verlag, 2007.
7. M.C.F. de Oliveira, M.A.S. Turine, and P.C. Masiero. A statechart-based model
for hypermedia applications. ACM Transactions on Information Systems, 19(1):52,
2001.
8. J. A. F. dos Santos. Multimedia and hypermedia document validation and
verification using a model-driven approach. Master’s thesis, Universidade Federal
Fluminense, 2012.
9. ITU. Nested Context Language (NCL) and Ginga-NCL for IPTV services.</p>
          <p>http://www.itu.int/rec/T-REC-H.761-200904-S, 2009.
10. D. P. Júnior, J. Farines, and C. A. S. Santos. Uma abordagem MDE para
Modelagem e Verificação de Documentos Multimídia Interativos. In WebMedia, 2011.
In Portuguese.
11. K.L. McMillan. Symbolic model checking: an approach to the state explosion
problem. Kluwer Academic Publishers, 1993.
12. J.C. Na and R. Furuta. Dynamic documents: authoring, browsing, and analysis
using a high-level petri net-based hypermedia system. In ACM Symposium on
Document engineering, pages 38–47. ACM, 2001.
13. C.A.S. Santos, L.F.G. Soares, G.L. de Souza, and J.P. Courtiat. Design
methodology and formal validation of hypermedia documents. In ACM International
Conference on Multimedia, pages 39–48. ACM, 1998.
14. L. F. G. Soares, R. F. Rodrigues, and D. C. Muchaluat-Saade. Modeling,
authoring and formatting hypermedia documents in the HyperProp system. Multimedia
Systems, 2000.
15. W3C. Synchronized Multimedia Integration Language - SMIL 3.0 Specification.</p>
          <p>http://www.w3c.org/TR/SMIL3, 2008.
16. W3C. HTML5: A vocabulary and associated APIs for HTML and XHTML.
http://www.w3.org/TR/html5/, 2011.</p>
        </sec>
        <sec id="sec-3-2-6">
          <title>SMADL: The Social Machines Architecture</title>
        </sec>
        <sec id="sec-3-2-7">
          <title>Description Language</title>
          <p>Leandro Marques do Nascimento1,2, Vinicius Cardoso Garcia1,</p>
          <p>
            Silvio R. L. Meira1
1 Informatics Center - Federal University of Pernambuco (UFPE),
2 Department of Informatics - Federal Rural University of Pernambuco (UFRPE)
{lmn2, vcg, srml}@cin.ufpe.br
Abstract. We are experiencing a high growth in the number of web
applications being developed. This is happening mainly because the web
is entering a new phase, called the programmable web, where several
web-based systems make their APIs publicly available. In order to deal with
the complexity of this emerging web, we define a notion of social
machine and envisage a language that can describe networks of such machines. To
start with, social machines are defined as tuples of input, output,
processes, constraints, states, requests and responses; apart from defining
the machines themselves, the language defines a set of connectors and
conditionals that can be used to describe the interactions between any
number of machines in a multitude of ways, as a means to represent
real machines interacting in the real web. This work presents a
preliminary version of the Social Machine Architecture Description Language
(SMADL).
</p>
          <p>1
Software systems are built upon programming languages. A programming
language is a notation for expressing computations (algorithms) in both machine-
and human-readable form. Appropriate languages and tools may drastically
reduce the cost of building new applications as well as maintaining existing ones
[
            <xref ref-type="bibr" rid="ref1 ref2">1</xref>
            ].
          </p>
          <p>
            In the context of programming languages, a Domain-Specific Language (DSL)
is a language that provides constructs and notations tailored toward a particular
application domain [
            <xref ref-type="bibr" rid="ref3">2</xref>
            ]. Usually, DSLs are small, more declarative than
imperative, and more attractive than General-Purpose Languages (GPL) for their
particular application domain.
          </p>
          <p>
            However, in software engineering several different artifacts are developed
besides code and one of the most important is the software architecture. Most
developers agree that architecture is needed in some way, shape, or form, but they
can’t agree on a definition, don’t know how to manage it efficiently in nontrivial
projects, and usually can’t express a system’s architectural abstractions precisely
and concisely [
            <xref ref-type="bibr" rid="ref4">3</xref>
            ]. When asking a developer to describe a system’s architecture,
Voelter [
            <xref ref-type="bibr" rid="ref4">3</xref>
            ] says “I get responses that include specific technologies, buzzwords
such as AJAX (asynchronous JavaScript and XML) or SOA (service-oriented
architecture), or vague notions of “components” (such as publishing, catalog, or
payment). Some have wallpaper-sized UML diagrams in which the meanings of
the boxes and lines aren’t clear.”
          </p>
          <p>These answers mention aspects that are actually related to a system’s
architecture, but none of them represent an unambiguous and/or “formal”
description of a system’s core abstractions. Indeed, it is not surprising because,
although there are languages that directly express software architectures, they
are not quite common among software developers.</p>
          <p>In order to better define software architectures, it is worthwhile to use DSLs,
taking advantage of their expressiveness in a limited domain. Our proposal relies
on an Architecture Description Language (ADL) for describing web-based
software systems in terms of Social Machines, a new concept developed by our
research group which tries to increase the abstraction level for comprehending
the web. Next, we present the context and the details about Social Machines.
</p>
          <p>
            2 An Emerging Web of Social Machines
The traditional concept of software has been changing during the last decades.
Since the first definition of a computing machine described by Turing in [
            <xref ref-type="bibr" rid="ref5">4</xref>
            ],
software started to become part of our lives and has become pervasive and
ubiquitous with the introduction of personal computers, the internet,
smartphones and, recently, the internet of things. In fact, one can say that software
and the internet changed the way we communicate, the way business is done
and the way software is developed, deployed and used. Nowadays, computing
means connecting [
            <xref ref-type="bibr" rid="ref6">5</xref>
            ] and sometimes it is said that developing software is the
same as connecting services [
            <xref ref-type="bibr" rid="ref7">6</xref>
            ], since there are several up and running software
services available.
          </p>
          <p>Recently, we can all clearly see that a new phase is emerging: the web “3.0”,
the web as a programming platform, the network as an infrastructure for
innovation, on top of which anyone can start developing, deploying and
providing information services using the computing, communication and control
infrastructures in a way fairly similar to utilities such as electricity.</p>
          <p>
            An overview of this Web 3.0 scenario can be seen in the ProgrammableWeb
website1. It gathers around 6500 publicly available APIs and more than 6700
mashups using them (last visit in July 2012). Although there have been many
studies about the future of the internet and concepts such as web 3.0,
programmable web [
            <xref ref-type="bibr" rid="ref8 ref9">7, 8</xref>
            ], linked data [
            <xref ref-type="bibr" rid="ref10">9</xref>
            ] and semantic web [
            <xref ref-type="bibr" rid="ref11 ref12">10, 11</xref>
            ], the
segmentation of data and the issues regarding the communication among systems
obfuscate the interpretation of this future. Unstructured data, unreliable parts and
non-scalable protocols are all native characteristics of the internet, which needs a
unifying view and explanations in order to be developed, deployed and used in
a more efficient and effective way.
          </p>
          <p>
            Furthermore, the Web concepts as we know them are recent enough to pose
many serious difficulties in understanding their basic elements and how they
can be efficiently combined to develop real, practical systems in personal,
social or enterprise contexts. Therefore, we developed a new concept called Social
Machine (SM), in order to provide a common and coherent conceptual basis
for understanding this still immature, upcoming and possibly highly innovative
phase of software development. The SM concept was first conceived in [
            <xref ref-type="bibr" rid="ref13">12</xref>
            ] and later
demonstrated with a case study in [
            <xref ref-type="bibr" rid="ref14">13</xref>
            ].
          </p>
          <p>So, we define a SM as a tuple, as follows:</p>
          <p>SM = &lt;Rel, WI, Req, Resp, S, Const, I, P, O&gt;</p>
          <p>
            In general, a SM represents a connectable and programmable entity
containing an internal processing unit (P) and a wrapper interface (WI) that waits for
requests (Req) from and replies [with responses (Resp)] to other social
machines. Its processing unit receives inputs (I), produces outputs (O) and has
states (S); and its connections define intermittent or permanent relationships
(Rel) with other SMs. These relationships are connections established under
specific sets of constraints (Const). Our goal with this concept of a Social
Machine is not to formally describe software services as can be seen in [
            <xref ref-type="bibr" rid="ref15">14</xref>
            ], but
instead we want to describe the programmable web at a higher level of
abstraction, thus increasing the power of new programming structures or paradigms
dedicated to this context. Figure 1 illustrates a basic representation of a Social
Machine.
          </p>
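          <p>The tuple above can be written down directly as a data structure. The class and field names are our illustrative encoding of the tuple, not an official SMADL artifact:
```python
# The Social Machine tuple <Rel, WI, Req, Resp, S, Const, I, P, O>
# encoded as a data structure, field by field.

from dataclasses import dataclass, field

@dataclass
class SocialMachine:
    relationships: list = field(default_factory=list)      # Rel
    wrapper_interface: dict = field(default_factory=dict)  # WI
    requests: list = field(default_factory=list)           # Req
    responses: list = field(default_factory=list)          # Resp
    states: set = field(default_factory=set)               # S
    constraints: list = field(default_factory=list)        # Const
    inputs: list = field(default_factory=list)             # I
    processing_unit: object = None                         # P
    outputs: list = field(default_factory=list)            # O

sm = SocialMachine(requests=["GET /status"], states={"idle"})
```
</p>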
          <p>Fig. 1. A graphical representation of a Social Machine.</p>
          <p>The idea behind Social Machines is to take advantage of the networked
environment they are in to make it easier to combine and reuse existing services
from different SMs and use them to implement new ones. Hence, we can
highlight some of their main characteristics: Sociability, Compositionality,</p>
          <p>Platform and Implementation independence, Self-awareness, Discoverability and,
last but not least, Programmability.</p>
          <p>There may be different types of social machines, but one way to classify
them is through the simple taxonomy shown in Figure 2, based on the types of
interactions they have with each other, as follows:
– Isolated - Social Machines that have no interaction with other Social
Machines;
– Provider - Social Machines that provide services for other Social Machines
to consume;
– Consumer - Social Machines that consume services that other Social
Machines provide;
– Prosumer - Social Machines that both provide and consume services.</p>
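          <p>Since the taxonomy depends only on whether a Social Machine provides and/or consumes services, it can be computed from two booleans (the function is illustrative):
```python
# The four-way taxonomy above, computed from the two interaction facts.

def classify(provides: bool, consumes: bool) -> str:
    if provides and consumes:
        return "Prosumer"
    if provides:
        return "Provider"
    if consumes:
        return "Consumer"
    return "Isolated"
```
</p>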
          <p>Fig. 2. Social Machines as a partial order diagram.</p>
          <p>In this work, we envisage an Architecture Description Language that can
describe networks of SMs. Apart from defining the machines themselves, the
ADL defines a set of connectors and conditionals that can be used to describe the
interactions between any number of machines in a multitude of ways, as a means
to represent real machines interacting in the real web. Details are presented next.
</p>
          <p>3 The Social Machines Architecture Description Language - SMADL</p>
          <p>This work is an attempt to answer the following research question: “Is it possible
to integrate diverse web applications using a standard architecture description
language?”. In order to answer it, this work proposes a new ADL for defining
social machines: SMADL.</p>
          <p>Social Machines can be connected (or establish a relationship) in basically
two phases: in the first phase, the SMs must find each other, and there must
be a SM registry service, much like the Internet DNS; in the second phase, the SMs
actually connect to each other and exchange information for a limited period
of time. The SM registry service is out of the scope of this proposal. We are
assuming SMs can find each other without much effort.</p>
          <p>In order to comprise these two phases, SMADL is composed of two
mini-languages:
– VCL (Visitor Card Language): presents the externally visible properties
of a SM, i.e., which requests and responses it accepts, which types of
inputs/outputs it handles, whether an internal state is maintained and/or how many
requests it can handle per unit of time. A vCard is provided by the SM
registry to the consumer SM, so it can decide whether the relationship is
interesting or not. Business issues are also present in a SM vCard, such as billing
information and service level agreements. Nowadays, popular web APIs do
not make such business information available in a programmatic way.
Usually, there are only a few lines in reduced-font-size contracts mentioning that
important information.
– WIL (Wrapper Interface Language): we are assuming every SM has a vCard
with which the wrapper interface is fully compliant. This language is
responsible for actually connecting SMs, establishing pre- and post-conditions,
applying different connectors in a SM composition, and implementing business
rules associated with a given set of SMs. Our proposal is to use WIL not as a
substitute for the currently available technologies. Instead, it raises the
level of abstraction of these technologies, freeing the programmer to
concentrate on the business issues of the SM relationships.</p>
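          <p>The externally visible properties a VCL vCard advertises can be sketched as a record; the field names are our assumption, since VCL's concrete syntax is still under design:
```python
# Sketch of the information a VCL "vCard" advertises, per the description
# above: accepted requests/responses, input/output types, statefulness,
# a rate limit, and business terms. Field names are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class VCard:
    accepted_requests: tuple
    accepted_responses: tuple
    input_types: tuple
    output_types: tuple
    stateful: bool
    max_requests_per_minute: int
    billing_info: str = ""
    service_level_agreement: str = ""

card = VCard(("post_note",), ("note_id",), ("text",), ("json",),
             stateful=True, max_requests_per_minute=60)
```
A consumer SM could inspect such a record programmatically to decide whether a relationship is worth establishing, which is exactly the business information today's APIs bury in contracts.</p>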
          <p>To better understand how these mini-languages are used, Figure 3 shows the
steps for establishing a relationship between two SMs, as follows:
1. Initially, the requester SM, in this case represented by Evernote (upper left
icon), searches for some SM registered as a micro blog, in our case
Twitter (upper right icon). Note that Evernote presents its vCard to the DNS while
searching for a service, since the responder may or may not accept
connections with that specific SM.
2. The SM DNS finds Twitter and requests its vCard.
3. Twitter responds with its vCard, accepting the relationship.
4. The SM DNS replies back to Evernote with the Twitter vCard, which
includes its address.
5. Using Twitter’s vCard and knowing its wrapper interface, Evernote establishes
a relationship with Twitter, following all imposed conditions.</p>
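          <p>The five steps above can be sketched as a registry lookup followed by a direct connection; the registry and SM classes below are hypothetical stand-ins for the SM DNS described in the text:
```python
# Sketch of the relationship-establishment steps: registry lookup
# (steps 1-4), then a direct connection (step 5). All names are
# hypothetical placeholders, not SMADL's actual API.

def establish_relationship(requester_card, service_type, registry):
    # Steps 1-2: the requester presents its vCard and asks the registry
    # for a machine registered under service_type.
    responder = registry.find(service_type, requester_card)
    if responder is None:
        return None
    # Steps 3-4: the registry returns the responder's vCard (with address).
    responder_card = responder.vcard()
    # Step 5: connect directly, under the conditions the vCard imposes.
    return requester_card, responder_card

class FakeRegistry:
    def __init__(self, services):
        self.services = services
    def find(self, service_type, _requester_card):
        return self.services.get(service_type)

class FakeSM:
    def __init__(self, card):
        self.card = card
    def vcard(self):
        return self.card

registry = FakeRegistry({"microblog": FakeSM({"address": "twitter"})})
link = establish_relationship({"address": "evernote"}, "microblog", registry)
```
</p>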
          <p>
            There are several popular technologies for integrating web-based or
service-oriented systems, such as REST [
            <xref ref-type="bibr" rid="ref16">15</xref>
            ] and OSGi [
            <xref ref-type="bibr" rid="ref17">16</xref>
            ]. The current version of SMADL
generates code for REST-based apps, as REST is becoming the most popular on
the web, adopted by big players such as Facebook and Google. According to
the ProgrammableWeb site, 4300 out of approximately 6500 APIs use REST as their
base technology.
          </p>
          <p>
            SMADL is being developed on the Xtext language workbench [
            <xref ref-type="bibr" rid="ref18">17</xref>
            ]. As this is a
work in progress, we are preliminarily evaluating alpha versions of the language
and planning an experiment using the approach proposed by [18].
We performed a systematic mapping study [19] to better understand the
DSL/ADL research field, as shown in [20]. Initially, 4450 studies were identified,
and, after filtering, 1440 primary studies were selected and categorized. Among
those primary studies, different methods/techniques for handling DSLs
(creating, evolving, maintaining, testing) could be listed, and several DSLs applied
to several different domains could be identified. The domain where DSLs are
most frequently applied is the Web domain. Other domains, such as embedded
systems, data-intensive apps, and control systems, were quite common too.
          </p>
          <p>In our study we could enumerate 30 publications directly related to ADLs.
Amongst them, only two mention the Web domain, both from 2010. In
the first one [21], the authors propose to formalize the architectural model using
a domain-specific language, an ADL which supports the description of dynamic,
adaptive and evolvable architectures, such as SOA itself. Their ADL allows the
definition of executable versions of the architecture. The second one [?]
presents a framework for the implementation of best practices concerning the
design of the software architecture. The authors present an implementation of the
framework in the Eclipse platform and an ADL dedicated to Web applications.</p>
          <p>In addition, practical examples, such as Yahoo! Pipes (http://pipes.yahoo.com)
and IfThisThenThat (http://ifttt.com/), can be seen as related work. The former
uses a graphical tool for customizing data flows from different sources. The
latter allows end users to program the web
based on pre-defined events fired by a set of channels; for example, if someone
tags you on a given social network (channel 1), then save this photo in the
person’s virtual drive (channel 2). The user can choose among different events
from different channels, which are, in practice, websites that make their APIs
available. Our work attempts a completely different way to program
the Web, not based on pre-defined parameters. The idea behind SMADL is to
actually define every public API on the Web and the relationships among them.
This way, the composition possibilities for several SMs can be infinite.</p>
          <p>As can be seen, this is a relatively new research field and we believe we can
make a considerable contribution by establishing the concept of a Social Machine
and developing an ADL for supporting it.
</p>
          <p>5 Concluding Remarks and Future Work
This work presents SMADL, the Social Machines Architecture Description
Language, as a possible solution for modeling web-based software systems.</p>
          <p>In general, a Social Machine (SM) represents a connectable and programmable
entity containing an internal processing unit (P) and a wrapper interface (WI)
that waits for requests (Req) from and replies [with responses (Resp)] to other
social machines. Its processing unit receives inputs (I), produces outputs (O)
and has states (S); and its connections define intermittent or permanent
relationships (Rel) with other SMs. These relationships are connections established
under specific sets of constraints (Const).</p>
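          <p>As a rough illustration (ours, not the paper’s formal model), the definition above can be transcribed into a record type whose fields mirror the symbols P, WI, Req, Resp, I, O, S, Rel and Const; everything beyond the field names is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class Relationship:
    target: "SocialMachine"                        # the other SM (Rel)
    constraints: set = field(default_factory=set)  # Const
    permanent: bool = False                        # intermittent vs. permanent

@dataclass
class SocialMachine:
    inputs: list          # I
    outputs: list         # O
    states: set           # S
    relationships: list   # Rel, each established under its constraints

    def wrapper_interface(self, request):
        """WI: accept a request (Req) and reply with a response (Resp)."""
        return self.process(request)

    def process(self, request):
        # P: internal processing unit; here a stub that echoes the request.
        self.inputs.append(request)
        response = {"resp": request}
        self.outputs.append(response)
        return response

sm = SocialMachine(inputs=[], outputs=[], states={"idle"}, relationships=[])
print(sm.wrapper_interface({"req": "status"}))
```

A real SM would replace the stub processing unit with domain logic and would check the relationship constraints inside the wrapper interface before answering a request.</p>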
          <p>Our main goal is to use SMADL to describe SM relationships so that we
can have one uniform way to program the web, independently of what
technology/platform is being used. The current (alpha) version of SMADL generates
code for REST-based apps, as REST is becoming the most popular on the web,
adopted by big players such as Facebook and Google. Currently, we are
working on several sample apps which have their architecture written in SMADL.
These apps are basically consumer SMs; that is, they do not make
public features available, but only consume other public APIs. These apps are going to
be distributed as open source.</p>
          <p>Our next steps include writing fully prosumer social machines, i.e.,
applications that connect to others, process their data, and in some way make this
data available for others to consume. At that phase, we are planning to perform an
experiment, following the methodology described in [18].</p>
          <p>Acknowledgments. This work was partially supported by the National
Institute of Science and Technology for Software Engineering (INES), funded
by CNPq and FACEPE, grants 573964/2008-4, APQ-1037-1.03/08 and
APQ-1044-1.03/10, and the Brazilian agency CNPq (processes 475743/2007-5 and
140060/2008-1).</p>
          <p>References
1. Pressman, R.S.: Software engineering: a practitioner’s approach (2nd ed.).</p>
          <p>McGraw-Hill, Inc., New York, NY, USA (1986)
2. Mernik, M., Heering, J., Sloane, A.M.: When and how to develop domain-specific
languages. ACM Comput. Surv. 37(4) (December 2005) 316–344
3. Völter, M.: Architecture as language. IEEE Software 27(2) (2010) 56–64
4. Turing, A.M.: On computable numbers, with an application to the
Entscheidungsproblem. Proceedings of the London Mathematical Society 42 (1936) 230–
265
5. Roush, W.: Social Machines. Technology Review (2006) 1–18
6. Turner, M., Budgen, D., Brereton, P.: Turning software into a service. Computer
36(10) (October 2003) 38–44
7. Yu, S., Woodard, C.J.: Service-oriented computing — icsoc 2008 workshops.</p>
          <p>Springer-Verlag, Berlin, Heidelberg (2009) 136–147
8. Hwang, J., Altmann, J., Kim, K.: The structural evolution of the web 2.0 service
network. Online Information Review 33(6) (2009) 1040–1057
9. Bizer, C., Heath, T., Berners-Lee, T.: Linked data - the story so far. Int. J.</p>
          <p>Semantic Web Inf. Syst. 5(3) (2009) 1–22
10. Berners-Lee, T., Hendler, J., Lassila, O.: The Semantic Web. Scientific American
284(5) (2001) 34–43
11. Hitzler, P., Krtzsch, M., Rudolph, S.: Foundations of Semantic Web Technologies.</p>
          <p>1st edn. Chapman &amp; Hall/CRC (2009)
12. Meira, S.R.L., Burégio, V.A., Nascimento, L.M., de Figueiredo, E.G.M., Neto, M.,
Encarnação, B.P., Garcia, V.C.: The Emerging Web of Social Machines. CoRR
abs/1010.3 (2010)
13. Meira, S.R.L., Buregio, V.A.A., Nascimento, L.M., Figueiredo, E., Neto, M.,
Encarnacao, B., Garcia, V.C.: The Emerging Web of Social Machines. In: 2011 IEEE
35th Annual Computer Software and Applications Conference, IEEE (July 2011)
26–27
14. Broy, M., Krüger, I.H., Meisinger, M.: A formal model of services. ACM Trans.</p>
          <p>Softw. Eng. Methodol. 16(1) (February 2007)
15. Richardson, L., Ruby, S.: Restful web services. First edn. O’Reilly (2007)
16. Hall, R.S., Pauls, K., McCulloch, S., Savage, D.: OSGi in Action: Creating Modular</p>
          <p>Applications in Java. Volume 188. Manning (2010)
17. Eysholdt, M., Behrens, H.: Xtext: implement your language faster than the quick
and dirty way. In: Proceedings of the ACM international conference companion
on Object oriented programming systems languages and applications companion.</p>
          <p>SPLASH ’10, New York, NY, USA, ACM (2010) 307–309
18. Juristo, N., Moreno, A.: Basics of Software Engineering Experimentation. Springer
(2001)
19. Petersen, K., Feldt, R., Mujtaba, S., Mattsson, M.: Systematic mapping studies
in software engineering. In: Proceedings of the 12th international conference on
Evaluation and Assessment in Software Engineering. EASE’08, Swinton, UK, UK,
British Computer Society (2008) 68–77
20. Nascimento, L.M., Viana, D.L., da Mota Silveira Neto, P.A., Souto, S.F., Martins,
D.A.O., Garcia, V.C., Meira, S.R.L.M.: Domain-Specific Languages - A Systematic
Mapping Study. In: Proceeedings of 7th International Conference on Software
Engineering Advances (ICSEA). (2012)
21. López-Sanz, M., Cuesta, C.E., Marcos, E.: Formalizing high-level service-oriented
architectural models using a dynamic ADL. In: Proceedings of the 2010 international
conference on On the move to meaningful internet systems. OTM’10, Berlin,
Heidelberg, Springer-Verlag (2010) 57–66</p>
          <p>Ted Kaminski
Department of Computer Science and Engineering
University of Minnesota, Minneapolis, MN, USA</p>
          <p>
            tedinski@cs.umn.edu
Abstract. Domain-specific languages offer a variety of advantages, but
their implementation techniques have disadvantages that sometimes
prevent their use in practice. Language extension offers a potential solution
to some of these problems, but remains essentially unused in practice. It
is our contention that the main obstacle to adoption is the lack of any
assurance that the compiler composed of multiple independent language
extensions will work without the need for additional modifications, or at
all. We propose to solve this problem by requiring extensions to
independently pass a composition test that will ensure that any such extensions
can be safely composed without “glue code,” and we propose to
demonstrate that interesting extensions are still possible that satisfy such a
test.
1
Domain-specific languages (DSLs) come with a variety of reasonably well-known
advantages and disadvantages [
            <xref ref-type="bibr" rid="ref4">3</xref>
            ]. Some of these disadvantages do not seem to
be inherent to DSLs in general, but are a consequence of the way they are
implemented. In particular, many implementation techniques lack or poorly support
composition, meaning multiple DSLs cannot easily be used together to solve a
problem.
          </p>
          <p>
            To be more precise about what we mean by language composition, we will
use some of the classification and notation of Erdweg, Giarrusso, and Rendel [
            <xref ref-type="bibr" rid="ref6">5</xref>
            ].
The notation H / E represents a host language H composed with a language
extension E, specifically crafted for H. Another composition operator, L1 ⊎g L2,
denotes the composition of two distinct languages with “glue code” g. Permitting
only the / form of language composition (“language extension”) is not sufficient:
with H, H / E1, and H / E2, we are left with no option for composing all three
without modifying one of the extensions to have the form (H / E1) / E2 (or vice
versa). However, the ⊎g form of language composition (“language unification”)
is also insufficient for our purposes. The problem with this form of composition
is that the “glue code” g necessary to perform it is essentially an
admission that the composition is broken and must be repaired. (Though it is
still interesting that the composition can be repaired.)
          </p>
          <p>What we seek is a composition method L1 ⊎∅ L2, that is, language unification
without needing any glue code (g = ∅). This may seem impossible in general,
but there is hope in special cases, such as when both languages are extensions
to a common host: (H / E1) ⊎∅ (H / E2). Here we are tasked with resolving only
conflicts between E1 and E2, while the host language H is shared. We will say
that a DSL implementation technique supports composable language extension
if it is capable of composition of the form H / (E1 ⊎∅ E2). We further require that
the technique provides some assurance that the resulting composed language will
work as intended, and is not simply broken.</p>
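          <p>The distinction between the composition forms can be made concrete with a deliberately tiny model of our own devising: treat a language as nothing more than a set of production names, so that extending a shared host and then unifying the extensions is just a union that needs no glue code (all names below are hypothetical):

```python
# A "language" here is a set of production names; this toy model only
# illustrates the shape of the composition forms, not real composition.
H = {"expr", "num", "add"}        # host language
E1 = {"matrix_literal"}           # extension 1: new syntax
E2 = {"dimension_check"}          # extension 2: new analysis

HE1 = H | E1                      # H / E1
HE2 = H | E2                      # H / E2

# H / (E1 U E2): unifying two extensions of the same host is plain union;
# the hard part (which this model omits) is guaranteeing the union is not
# broken, i.e. that no glue code g is needed.
composed = H | E1 | E2
assert composed == HE1 | HE2
print(sorted(composed))
```

What the set model cannot show is precisely the subject of this thesis: the conditions under which the union is guaranteed to behave, which is where the modular analyses come in.</p>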
          <p>The goal of this work is to build a DSL implementation tool and demonstrate
that it satisfies the following criteria:
– Supports composable language extension, as defined above.
– Permits introduction of new syntax.
– Permits introduction of new static analysis on existing syntax.
– Capable of generating good, domain-specific error messages.
– Capable of complex translation, such as domain-specific optimizations.</p>
          <p>In Section 2 we provide some background on the tools we will be making
use of in pursuit of this goal. In Section 2.1 we survey some of the other tools
for implementing domain-specific languages. In Section 3 we propose the work
we plan for this thesis. In Section 3.1 we outline work beyond the scope of this
thesis.</p>
          <p>
            2 Background
The first major obstacle to supporting composable language extension is to
allow composition of syntax extensions. Although context-free grammars are easily
composed, the resulting composition may no longer be deterministic, or
otherwise amenable to parser generation. Copper [
            <xref ref-type="bibr" rid="ref16">15, 20</xref>
            ] is an LR(1) parser generator
that supports syntax composition of the form H / (E1 ⊎∅ E2) so long as each
H / E individually satisfies the conditions of its modular determinism
analysis. Assuming we require extensions to satisfy this analysis, Copper offers one
solution to the syntax side of the problem of supporting composable language
extension.
          </p>
          <p>
            Attribute grammars [
            <xref ref-type="bibr" rid="ref14">13</xref>
            ] are a formalism for describing computations over
trees. Trees formed from an underlying context-free grammar are attributed with
synthesized and inherited attributes, allowing information to flow, respectively,
up and down the tree. Each production in the grammar specifies equations that
define the synthesized attributes on its corresponding nodes in the tree, as well as
the inherited attributes on the children of those nodes. These equations defining
the value of an attribute on a node may depend on the values of other attributes
on itself and its children. Attribute grammars trivially support both the
“language extension” and “language unification” modes of language composition, by
simply aggregating declarations of nonterminals, productions, attributes, and
semantic equations.
          </p>
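          <p>The attribute flow described above can be sketched in a few lines of Python. This is our own illustrative encoding, not how Silver or any real attribute grammar system works (those generate evaluators from declarative equations); here each production class carries its equation for the synthesized attribute `value`, and the environment `env` plays the role of an inherited attribute passed down the tree:

```python
class Num:
    def __init__(self, n): self.n = n
    def value(self, env):          # synthesized attribute: flows up
        return self.n

class Var:
    def __init__(self, name): self.name = name
    def value(self, env):          # reads the inherited attribute `env`
        return env[self.name]

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def value(self, env):
        # The production's equation defines the synthesized attribute on
        # its node from the children's attributes; the inherited `env`
        # flows down unchanged to both children.
        return self.left.value(env) + self.right.value(env)

class Let:
    """let name = bound in body"""
    def __init__(self, name, bound, body):
        self.name, self.bound, self.body = name, bound, body
    def value(self, env):
        # This production defines a *new* inherited attribute value
        # (an extended environment) for its body child.
        extended = dict(env, **{self.name: self.bound.value(env)})
        return self.body.value(extended)

tree = Let("x", Num(2), Add(Var("x"), Num(3)))
print(tree.value({}))  # 5
```

The composition property mentioned above also holds in this encoding: adding a new production is adding a class, and adding a new attribute is adding a method to each class, i.e. declarations simply aggregate.</p>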
          <p>There is a natural conflict between introducing new syntax and static
analysis, referred to as the “expression problem.” Although normally formulated in
terms of data types, it applies equally well to abstract syntax trees, and thus
has consequences for language extension. If one language extension introduces
new syntax, and another a new analysis, the combination of the two extensions
would be missing the implementation of this analysis for this syntax. Either the
composition is then broken, glue code must be written to bridge this conflict,
or there must be some mechanism to accurately and automatically generate this
glue code.</p>
          <p>Attribute grammars are capable of solving the expression problem by
manually providing “glue code” that provides for evaluating new attributes on new
productions. However, the expression problem can also be automatically resolved
without glue code for attribute grammars that include forwarding [19]. An
extension production that forwards to a “semantically equivalent” tree in the host
language can evaluate new attributes introduced in other extensions via that
host language tree, where the attribute will have defining semantic equations.</p>
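          <p>The forwarding mechanism can be illustrated with a small sketch of our own (an assumption for exposition, not Silver’s actual implementation): when a node lacks a defining equation for an attribute, evaluation is delegated to its forward tree in the host language. One extension adds new syntax, another adds a new attribute, and they compose without glue code:

```python
class Node:
    def attr(self, name):
        eq = getattr(self, name, None)
        if eq is not None:
            return eq()                       # defining equation exists
        return self.forward().attr(name)      # else ask the forward tree

# --- Host language H: numbers and addition, with a `value` attribute.
class Num(Node):
    def __init__(self, n): self.n = n
    def value(self): return self.n

class Add(Node):
    def __init__(self, l, r): self.l, self.r = l, r
    def value(self): return self.l.attr("value") + self.r.attr("value")

# --- Extension E1: new syntax `Double`, forwarding to a host-language
#     tree that is "semantically equivalent".
class Double(Node):
    def __init__(self, e): self.e = e
    def forward(self): return Add(self.e, self.e)

# --- Extension E2: a new attribute `depth`, defined on host productions
#     only (E2 knows nothing about E1's Double).
Num.depth = lambda self: 1
Add.depth = lambda self: 1 + max(self.l.attr("depth"), self.r.attr("depth"))

# E1 and E2 compose without glue code: Double has no `depth` equation,
# so E2's attribute is evaluated via Double's forward tree.
t = Double(Num(21))
print(t.attr("value"), t.attr("depth"))  # 42 2
```

The key point mirrored here is that Double never needed an explicit equation for the attribute introduced by the other extension; the forward tree supplies it.</p>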
          <p>
            Although forwarding removes the need for the “glue code” necessary to
resolve the expression problem, there are other ways in which a composition
of attribute grammars may cause conflicts. Attribute grammars have a
“well-definedness” property that, roughly speaking, ensures each attribute can
actually be evaluated. However, although H, H / E1 and H / E2 may be well-defined,
there is no guarantee that H / (E1 ⊎∅ E2) will also be well-defined. As part of this
thesis, we have developed a modular well-definedness analysis [
            <xref ref-type="bibr" rid="ref12">11</xref>
            ] that provides
this guarantee. This analysis checks each H / E individually, and ensures that
the composition H / (E1 ⊎∅ E2) will be well-defined.
          </p>
          <p>
            2.1 Related work
Domain-specific languages are traditionally implemented as “external” DSLs,
and are therefore incapable of composition with each other. Internal (or embedded)
DSLs are those implemented as a “mere” library in a suitable host language [
            <xref ref-type="bibr" rid="ref10">9</xref>
            ].
Internal DSLs are interesting in part because they permit the kind of composition
we are interested in. However, they come with many drawbacks. For one, not
all languages are practical choices for internal DSLs, including many that are in
popular use, because the range of possible syntax is seriously limited by the host
language. Further, in their simplest form, internal DSLs cannot easily perform
domain-specific analysis, or complex translation.
          </p>
          <p>
            One way of making internal DSLs capable of domain-specific analysis is to
take advantage of complex embeddings into the host language’s type system.
AspectAG [21] and Ur/Web [
            <xref ref-type="bibr" rid="ref3">2</xref>
            ] are internal DSLs that take this approach to
enforcing certain properties. The drawback to these approaches is the error
messages: they are reported as type errors in the host language’s interpretation of
the types. In the worst case, understanding these error messages requires not
just a deep understanding of the property being checked, but also the particular
implementation and embedding of that property into the host language’s type
system.
          </p>
          <p>One way to improve the ability of internal DSLs to generate code is to take
advantage of meta-programming facilities in the language, like LISP macros, or</p>
          <p>
            C++ templates. Racket [
            <xref ref-type="bibr" rid="ref18">17</xref>
            ] offers sophisticated forms of macros to enable this
kind of translation. However, the static analysis capabilities of these macros are
quite limited, though they are able to generate surprisingly good error messages
for a macro system. (Especially surprising for those used to C++ template error
messages.)
          </p>
          <p>
            There are several systems for specifying languages that enable language
extension and unification, as described in the introduction. JastAdd [
            <xref ref-type="bibr" rid="ref5 ref8">7, 4</xref>
            ], Kiama [
            <xref ref-type="bibr" rid="ref17">16</xref>
            ],
and UUAG [
            <xref ref-type="bibr" rid="ref1 ref2">1</xref>
            ] are such systems based upon attribute grammars. SugarJ [
            <xref ref-type="bibr" rid="ref7">6</xref>
            ] is
a recent system built upon SDF [
            <xref ref-type="bibr" rid="ref9">8</xref>
            ] and Stratego [22]. Rascal [
            <xref ref-type="bibr" rid="ref13">12</xref>
            ] is a
metaprogramming language with numerous high-level constructs for analyzing and
manipulating programs. Helvetia [
            <xref ref-type="bibr" rid="ref15">14</xref>
            ] is a dynamic language based upon Smalltalk
with language extension capabilities. However, in each of these systems the
composition of multiple language extensions may need to be repaired with
glue code, and they otherwise provide little guarantee that the composition will work.
As a result, they do not support composable language extension in our sense.
          </p>
          <p>MPS [23] is a meta-programming environment that leans heavily on an
object-oriented view of abstract syntax, and consequently struggles with the
expression problem in its support for composition. As a result, the host language
limits the analyses over syntax that are possible. Many useful language
extensions do not necessarily need new analysis over the host language, however,
as macro systems for dynamic languages already demonstrate.</p>
          <p>
            3 Proposal
One component of this thesis has already been mentioned: our modular
well-definedness analysis for attribute grammars [
            <xref ref-type="bibr" rid="ref12">11</xref>
            ]. This work is fully described
elsewhere, but we will summarize it here. We say that an attribute grammar is
effectively complete if, during attribute evaluation, no attribute is ever demanded
that lacks a defining semantic equation. This analysis operates on each H / E
individually, and provides an assurance that the resulting H / (E1 ⊎∅ E2) will also
have this property, without the need to explicitly check this composed language.
To do this, the analysis is necessarily conservative about what extensions pass.
Roughly speaking, extensions must satisfy the following requirements:
– Extensions must not alter the flow types of host language synthesized
attributes. That is, they cannot require new (extension) inherited attributes be
supplied in order to evaluate existing (host language) synthesized attributes.
– New productions introduced in extensions must forward.
– The flow types for new attributes introduced by an extension must account
for the potential need to evaluate forward equations before they can be
evaluated.
          </p>
          <p>
            This modular well-definedness analysis, together with Copper’s modular
determinism analysis, offers a potential path towards composable language
extension. Silver [
            <xref ref-type="bibr" rid="ref11">18, 10</xref>
            ] is an attribute grammar-based language with support for
Copper, for which we have implemented our modular well-definedness analysis.
As the remainder of this thesis, we propose to evaluate whether this tool is truly
capable of composable language extension. This is not a given, because the range
of potential language extensions has been restricted:
– Forwarding requires that all extensions’ dynamic semantics be expressible in terms
of the host language. We do not anticipate this restriction being a burden, as
the host languages we are interested in extending are Turing-complete with
rich IO semantics.
– Copper’s analysis places restrictions on the syntax that can be introduced by
extensions, relative to their host language. Again, since the host languages
we are interested in extending often have highly complex concrete syntax
already, we expect these restrictions will be a light burden.
– Silver’s analysis places restrictions on how information can flow around
abstract syntax trees. Again, however, this is relative to the host language
implementation, which we expect to offer support for rich kinds of
information flow already.
          </p>
          <p>In light of these potential restrictions on the kinds of extensions that can be
specified in Silver, we wish to validate each of our goals:
– The analyses themselves accomplish the goal of supporting composable
language extension.
– We will need to implement at least two new extensions to the syntax.
– We will need to implement at least one new extension to static analysis.
– That static analysis extension should demonstrate the ability to generate
good, domain-specific error messages.
– One of the extensions should involve complex translation, require
domain-specific optimizations, or at least have stringent efficiency
requirements, to demonstrate the approach has little to no runtime overhead.</p>
          <p>We propose to build a host language specification for C in Silver. C is an
ambitious choice, but choosing a rich, practical language of independent design
is necessary to evaluate whether the analyses’ restrictions are practical, as they
depend on the host language. To this specification of C, we propose to build
language extensions that will meet the above requirements. These should ideally
be language extensions that already exist in the literature, so that the changes to
their design or syntax that are necessary to satisfy the analyses can be evaluated.</p>
          <p>From this we hope to learn:
– How to better design extensible host language implementations, to support
the development of interesting extensions. Many of the limitations imposed
by the analyses depend upon the host language implementation more so than
on the host language itself.
– Ways in which Silver itself may need to be extended to help specify the host
language and extensions. For example, proper aggregation of error messages
in the extensions could be ensured with language features specific to error
message aggregation.
– Whether the restrictions still permit interesting and practical language
extensions.
– Informally, whether the resulting extended languages are useful. We intend
for our colleagues to make use of these extended languages, providing some
feedback in this area, though we do not intend to perform an empirical
investigation.
</p>
          <p>3.1 Future work
Beyond the scope of this thesis, there lie many more problems that must be
solved to bring language extension to practicality.</p>
          <p>First, host languages must be developed in Silver before they can be
extended, and extensions can only be composed for a common host language, so
fragmentation must be kept to a minimum to avoid splitting the ecosystem
apart. Sufficiently high-quality implementations of host languages for production
use remain future work.</p>
          <p>Second, numerous less daunting engineering issues would also need resolving.
No obstacles to composing language extensions at runtime exist for Silver and
Copper, but the feature has yet to be fully implemented. Further, the build
process for making use of an extended compiler in large software projects must
be worked out.</p>
          <p>Third, a variety of other tooling must also be composable. Language
extensions must result not only in composed compilers, but also composed debuggers
and integrated development environments. Abandoning these tools is not an
option for practical use. We do not intend to directly address this problem in this
thesis, though concurrent work for such tools in Silver is ongoing.</p>
          <p>Finally, although these analyses ensure conflicts do not arise from the parser
or attribute evaluator, it is possible that conflicts could arise in some other
fashion. Certainly we can imagine blatantly wrong code, like suppressing all
error messages from subtrees. But the formulation of composable proofs of the
compiler’s correctness would complete our understanding of the problem posed by
composable language extension.
4
We believe that no existing DSL implementation tool satisfies all five goals listed
in the introduction: support composable language extension, allow extension of
both syntax and static analysis, provide good domain-specific error messages, and
allow complex translation. These goals are motivated by the desire
to ensure that the users of language extensions can be certain they can draw
on whatever high-quality extensions they need, without fear of breaking their
compiler.</p>
          <p>We have developed an analysis that ensures Silver meets the goal of
supporting composable language extension, and we have implemented this analysis. We
intend to develop an extensible specification of a popular and practical language,
C, and we intend to demonstrate that practical language extensions to it are
possible that satisfy this analysis. We believe this will demonstrate that Silver
satisfies all five goals listed in the introduction for an ideal DSL implementation
technique.</p>
          <p>References</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>F.</given-names>
            <surname>Baader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>McGuinness</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nardi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Patel-Schneider</surname>
          </string-name>
          .
          <article-title>The Description Logic Handbook</article-title>
          . Cambridge University Press,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          1.
          <string-name>
            <surname>Baars</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Swierstra</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Loh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          : Utrecht University AG system manual, http://www.cs.uu.nl/wiki/Center/AttributeGrammarSystem.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          2.
          <string-name>
            <surname>Chlipala</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Ur: statically-typed metaprogramming with type-level record computation</article-title>
          .
          <source>In: PLDI</source>
          ,
          <year>2010</year>
          . pp.
          <fpage>122</fpage>
          -
          <lpage>133</lpage>
          . ACM, New York, NY, USA (
          <year>2010</year>
          ), http://doi.acm.
          <source>org/10</source>
          .1145/1806596.1806612
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          3.
          <string-name>
            <surname>Deursen</surname>
            ,
            <given-names>A.v.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klint</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Visser</surname>
          </string-name>
          , J.:
          <article-title>Domain-specific languages: An annotated bibliography</article-title>
          .
          <source>ACM SIGPLAN Notices</source>
          <volume>35</volume>
          (
          <issue>6</issue>
          ),
          <fpage>26</fpage>
          -
          <lpage>36</lpage>
          (
          <year>Jun 2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          4.
          <string-name>
            <surname>Ekman</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hedin</surname>
          </string-name>
          , G.:
          <article-title>Rewritable reference attributed grammars</article-title>
          .
          <source>In: Proc. of ECOOP '04 Conf</source>
          . pp.
          <fpage>144</fpage>
          -
          <lpage>169</lpage>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          5.
          <string-name>
            <surname>Erdweg</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giarrusso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rendel</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Language composition untangled</article-title>
          .
          <source>In: LDTA 2012</source>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          6.
          <string-name>
            <surname>Erdweg</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rendel</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kästner</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ostermann</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>SugarJ: library-based syntactic language extensibility</article-title>
          .
          <source>In: OOPSLA 2011</source>
          . pp.
          <fpage>391</fpage>
          -
          <lpage>406</lpage>
          . ACM, New York, NY, USA (
          <year>2011</year>
          ), http://doi.acm.org/10.1145/2048066.2048099
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          7.
          <string-name>
            <surname>Hedin</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Magnusson</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>JastAdd - an aspect oriented compiler construction system</article-title>
          .
          <source>Science of Computer Programming</source>
          <volume>47</volume>
          (
          <issue>1</issue>
          ),
          <fpage>37</fpage>
          -
          <lpage>58</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          8.
          <string-name>
            <surname>Heering</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hendriks</surname>
            ,
            <given-names>P.R.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klint</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rekers</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>The syntax definition formalism SDF</article-title>
          .
          <source>SIGPLAN Not</source>
          .
          <volume>24</volume>
          (
          <issue>11</issue>
          ),
          <fpage>43</fpage>
          -
          <lpage>75</lpage>
          (
          <year>Nov 1989</year>
          ), http://doi.acm.org/10.1145/71605.71607
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          9.
          <string-name>
            <surname>Hudak</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Building domain-specific embedded languages</article-title>
          .
          <source>ACM Computing Surveys</source>
          <volume>28</volume>
          (
          <year>4es</year>
          ) (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          10.
          <string-name>
            <surname>Kaminski</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Wyk</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Integrating attribute grammar and functional programming language features</article-title>
          .
          <source>In: Proceedings of the 4th International Conference on Software Language Engineering (SLE</source>
          <year>2011</year>
          ). LNCS, vol.
          <volume>6940</volume>
          , pp.
          <fpage>263</fpage>
          -
          <lpage>282</lpage>
          . Springer (July
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          11.
          <string-name>
            <surname>Kaminski</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Wyk</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Modular well-definedness analysis for attribute grammars</article-title>
          (
          <year>2012</year>
          ),
          <source>accepted to SLE 2012</source>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          12.
          <string-name>
            <surname>Klint</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van der Storm</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vinju</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Rascal: a domain specific language for source code analysis and manipulation</article-title>
          .
          <source>In: Proc. of Source Code Analysis and Manipulation (SCAM 2009)</source>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          13.
          <string-name>
            <surname>Knuth</surname>
            ,
            <given-names>D.E.</given-names>
          </string-name>
          :
          <article-title>Semantics of context-free languages</article-title>
          .
          <source>Mathematical Systems Theory</source>
          <volume>2</volume>
          (
          <issue>2</issue>
          ),
          <fpage>127</fpage>
          -
          <lpage>145</lpage>
          (
          <year>1968</year>
          ),
          corrections in 5(
          <year>1971</year>
          ) pp.
          <fpage>95</fpage>
          -
          <lpage>96</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          14.
          <string-name>
            <surname>Renggli</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gîrba</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nierstrasz</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>Embedding languages without breaking tools</article-title>
          .
          <source>In: ECOOP 2010</source>
          . pp.
          <fpage>380</fpage>
          -
          <lpage>404</lpage>
          . Springer (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          15.
          <string-name>
            <surname>Schwerdfeger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Wyk</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Verifiable composition of deterministic grammars</article-title>
          .
          <source>In: Proc. of ACM SIGPLAN Conference on Programming Language Design and Implementation</source>
          (PLDI). ACM Press (
          <year>June 2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          16.
          <string-name>
            <surname>Sloane</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          :
          <article-title>Lightweight language processing in Kiama</article-title>
          .
          <source>In: Proc. of the 3rd Summer School on Generative and Transformational Techniques in Software Engineering III (GTTSE 09)</source>
          . pp.
          <fpage>408</fpage>
          -
          <lpage>425</lpage>
          . Springer (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          17.
          <string-name>
            <surname>Tobin-Hochstadt</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>St-Amour</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Culpepper</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Flatt</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Felleisen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Languages as libraries</article-title>
          .
          <source>In: PLDI 2011</source>
          . pp.
          <fpage>132</fpage>
          -
          <lpage>141</lpage>
          . ACM, New York, NY, USA (
          <year>2011</year>
          ), http://doi.acm.org/10.1145/1993498.1993514
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>