<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Expeditious Approach to Modeling IDE Interaction Design</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vasco Sousa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eugene Syriani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DIRO, University of Montreal</institution>
          ,
          <country country="CA">Canada</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Software tools are being used by experts in a variety of domains. There are numerous software modeling editor environments (MEs) tailored to a specific domain expertise. However, there is no consistent approach to generically synthesize a product line of such MEs that also takes into account the user interaction and experience (UX) adapted to the domain. In this position paper, we propose a solution to explicitly model the UX of MEs so that different aspects of UX design can be specified by non-programming experts. Our proposal advocates the use of multi-paradigm modeling, where this aspect of the design of an ME is modeled explicitly and adapted to a specific expert user.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
Software modeling refers to the use of software to model a solution for a problem
in a specific domain (e.g., music, finance, biology). As such, there is a plethora
of software tools that enable domain experts (e.g., musicians) to represent,
manipulate, and simulate models using notations from the domain, specified at
a suitable level of abstraction (e.g., a music sheet) [
        <xref ref-type="bibr" rid="ref12">12</xref>
]. In the programming
domain, such tools are often referred to as Integrated Development Environments
(IDEs), for example Eclipse.
      </p>
      <p>
        Model-Driven Engineering (MDE) [
        <xref ref-type="bibr" rid="ref5">5</xref>
] is a generic approach that has proved
able to generate IDEs in a variety of domains. A main aspect of MDE
is the creation of a domain-specific modeling language (DSL) that
defines the domain, a modeling editor environment (ME) to produce models
based on the DSL, and tools that can manipulate these models. The creation
of a DSL has traditionally been separated into two aspects: the specification
of the abstract syntax (AS), which defines the components and structure of the
language, and the specification of the concrete syntax (CS), which defines the symbols and
notations associated with AS elements. For instance, if we consider the modeling of a
music sheet, the concepts of note and tempo are part of the AS, and the CS is the
way we represent them, e.g., with the corresponding musical symbols. The use of an ME provides the means to
create and manipulate these domain-specific models, making sure they conform
to their language and providing feedback on the modeling process, such as only
allowing musical symbols and making sure they are correctly placed on the music
sheet. The user experience (UX) [
        <xref ref-type="bibr" rid="ref2">2</xref>
] is the set of actions, interfaces, and feedback
that characterizes the interactions of a user with the software. The creation and
manipulation of models, as well as the UX of the ME, are the focus of this work.
      </p>
      <p>
        In most current approaches to ME development, such as the Eclipse Modeling
Framework (EMF) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and MetaEdit+ [
        <xref ref-type="bibr" rid="ref10">10</xref>
], the user interaction is restricted to
the set of interactions generically built into the IDE, available as-is to all
domain-specific environments generated from it. These include, for example, the use of
a toolbar with common operations, e.g., copy and paste, and a sidebar with the
list of modeling components, in the form of buttons, to select and place them
on the main modeling area. Furthermore, in these modeling frameworks, the
ME is generated from the AS and the CS. Therefore, any further customization
requires manually introducing code into the generated code, as pointed out in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
However, the user interaction with the ME differs greatly in non-programming
domains, which are not the main focus of IDEs like Eclipse. For example, the
alignment of the notes is crucial in an ME for music sheets. Thus, the modeler
needs to understand the internals of the framework instead of intuitively defining
how musicians expect to interact with a music sheet. This is one of the main
practical reasons why domain experts continue to use IDEs dedicated to their
domains, instead of IDEs generated from generic MDE frameworks.
      </p>
      <p>
        With the popularity of new computational devices and peripherals at the
expense of traditional desktop environments, such as tablets with varying sizes,
virtual and augmented reality goggles, and collaborative interactive tables, new
human user interactions will keep on appearing. It becomes clear that the
interaction with MEs needs to be adapted in new ways. For instance, works such
as [
        <xref ref-type="bibr" rid="ref1 ref13">1,13</xref>
] strive to expand user interactions beyond traditional means,
to tackle UX issues and to capture other requirements and ways of expressing
user interactions.
      </p>
      <p>
In this position paper, we propose to promote the UX into an explicit
component of ME creation, at the same level as the AS and CS of a DSL. This
would allow us to tap into knowledge from the UX domain [
        <xref ref-type="bibr" rid="ref2">2</xref>
] and to
define families of interactions that provide a basis for adapting the ME to
different devices and user profiles. To achieve this, we need to express these
interactions in a platform-independent manner, in terms adapted
to UX experts and domain experts in addition to software developers, and to
transform these descriptions of interaction into platform-specific interactions for
the deployment of the ME on diverse mediums. Our proposal advocates the use
of multi-paradigm modeling, where this aspect of the design of an ME is modeled
explicitly and adapted to a specific expert user.
      </p>
<p>In Section 2, we briefly discuss work done on the development of user
interfaces. In Section 3, we extend the MDE approach for generating MEs by
tackling the specification of the UX. We illustrate our approach with a simple
music modeling language example and its editor. Finally, we discuss our roadmap
in Section 4.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
The closest work to addressing the interaction issue in some standard or formal
way is the recent standardization of the Interaction Flow Modeling
Language (IFML) [
        <xref ref-type="bibr" rid="ref6">6</xref>
] by the Object Management Group (OMG). IFML was born
from work on modeling web applications [
        <xref ref-type="bibr" rid="ref3">3</xref>
]. Although UX was not intended to
be within the scope of IFML and its precursors, we still consider it related
work, since it covers some segments of the interaction portion of UX. Its goal is to
allow software developers to specify user interactions by describing their
components, state objects, references to the underlying business logic and its data, and
the logic that controls the distribution and triggering of the interactions. Despite
this distance from full UX, the accumulated knowledge present in IFML is still
valuable to our research.
      </p>
<p>There are nevertheless several aspects of IFML that do not accord with
our approach. IFML describes user interfaces (UI) as constituents without any
specification of visual properties or design choices. This limits the description,
since these properties have a large influence on the UX and should be adapted
accordingly. IFML is targeted at software developers instead of UX designers,
who could use their domain knowledge to better adapt the software to its users.
It promotes combined use with other OMG standards, such as Class
Diagrams and Business Process Models, but through the declaration of specific function
calls that effectively bind the user interaction to a specific implementation,
obfuscating most of the abstraction effort. It also relegates complex interactions
to be specified by the implementation framework instead of being an explicit
part of the interaction model. Finally, our evaluation of IFML also concluded
that it has some limitations in terms of scalability: the complexity of
the specification increases greatly with each UI element that is introduced, and
the use of modules is focused on implementation reuse rather than development
simplification.</p>
      <p>
        In our previous work [
        <xref ref-type="bibr" rid="ref12">12</xref>
], we evaluated the user interaction in 25 MEs from
different domains. We provided metrics for measuring the usability and
suitability of domain-specific MEs. We also investigated which ME features are needed
in which domain and how they should be presented to the domain user. The
present work relies on the results of [
        <xref ref-type="bibr" rid="ref12">12</xref>
], which serves as requirements for our
current work.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Proposed Solution</title>
      <p>
Our first step in tackling the specification of user interactions for a DSL is to break
it into different models, each focusing on a different aspect of user interaction
description. This allows us to specify each aspect in its own domain-specific
model, with its own concepts, targeted to the appropriate experts. From the
knowledge acquired in [
        <xref ref-type="bibr" rid="ref12 ref6">6,12</xref>
], we propose a description of the UI, a description
of the flow of interactions that defines the behavior of the interactions, a list
of interaction requirements of the ME for a specific domain, a list of what a
specific system provides in terms of interaction events, a mapping that binds
all of these models together, and a mapping that provides the link between
the interaction descriptions and the information needed for the platform-specific
implementation.
      </p>
      <sec id="sec-3-1">
        <title>Music Modeling Example</title>
        <p>
To illustrate our proposal, we use the case of a music modeling language akin to
MusicXML [
          <xref ref-type="bibr" rid="ref7">7</xref>
], and the subsequent specification of the interaction and artifacts
of this ME. We chose the domain of music writing because its interactions are
diverse and complex, owing to the large body of work outside computational environments,
and because it removes the bias of staying in a domain close to computing,
which is the predominant application domain in the MDE literature. This is
inspired by the MuseScore ME [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
        </p>
<p>The language provides a form for writing music sheets. In this paper, we focus
only on a small number of interactions: various ways of placing notes on the sheet,
playing a note, and selecting a note length (half note, quarter note, etc.). The goal
here is to demonstrate how we address multiple interaction requirements of a
music modeling language, so that the final ME is properly adapted to the music
domain expert and the environment they use.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Modeling Views</title>
        <p>
In our approach, domain-specific MEs are not only generated from the AS and
CS, specified by the language modeler using common MDE techniques [
          <xref ref-type="bibr" rid="ref15">15</xref>
], but
also from the UX model. This model is composed of the following views.
Interface Model The Interface Model is aimed at UX designers, to define
the static representation of the user interface of the ME for a particular domain.
This model specifies all the visual and non-visual artifacts used to convey
information to and from a user and how they relate to each other (positioning,
overlapping, precedence, sound, vibration, etc.). A good candidate to define
this model is the Diagram Definition (DD) standard.
        </p>
        <p>[Fig. 1: excerpt of the Diagram Graphics metamodel, showing graphical elements (Marker, Canvas, Fill, Style) and organizational constructs (Group, ClipPath).]</p>
        <p>Fig. 1 depicts an excerpt of the Diagram Graphics metamodel that specifies the
definition of graphical elements, and the segment of the metamodel that allows the
specification of element groups and other organizational constructs, such as the
canvas. However, these are not enough to define the topological placement of UI
elements in a variety of dimensional spaces, from 0D (numerical display, single
speaker) to 2D (screen, binaural audio, touch tables), that we require for UI
specification. Thus, our proposal is to extend the DD metamodel with constructs
that allow for a better specification of UI, as shown in Fig. 2.</p>
        <p>[Fig. 2: extension of the DD metamodel with InteractionStream, Layer, Anchor, TopologicalGuard, Widget, AudioElement, and HapticFeedback constructs relating to DD GraphicalElement, Group, and Canvas.]</p>
        <p>We start by organizing these spaces as Interaction Streams (IS). These
represent forms of interaction with the user that are not representable in the same
space, e.g., screen elements, keyboard inputs, and sound. Fig. 3a shows an
Interaction Stream, namely the representation of screen elements for our music ME.
Fig. 3b shows a note input stream and a sound stream, both complementing the
first stream, but with no direct visual representation as a screen element. These
abstract inputs can then be mapped onto more concrete forms of input, such as
an electronic piano keyboard, but at this level, we only allow for the inclusion
of platform-independent input.</p>
        <p>These IS are in turn organized into Layers. Layers group representational
characteristics of the UI elements such as shade, size, position, data, audio
characteristics, representations of haptic feedback, and other perceptible
characteristics, and how to group them, in the form of DD Graphic Elements (GE).</p>
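        <p>As a minimal illustration of this containment hierarchy (Interaction Streams containing ordered Layers, which contain GEs), the following Python sketch mirrors the structure informally; the class and attribute names are assumptions based on the metamodel excerpt of Fig. 2, not an implementation from this work.</p>

```python
from dataclasses import dataclass, field

@dataclass
class GraphicalElement:
    """A DD Graphic Element (GE): one perceptible UI artifact."""
    name: str

@dataclass
class Layer:
    """Groups representational characteristics of UI elements as ordered GEs."""
    name: str
    members: list = field(default_factory=list)

@dataclass
class InteractionStream:
    """A form of user interaction not representable in the same space as others."""
    name: str
    dimensions: int  # 0D (e.g., single speaker) up to 2D (e.g., screen)
    layers: list = field(default_factory=list)  # ordered Layers

# The music ME of Fig. 3: a 2D screen stream plus a 0D sound-output stream.
screen = InteractionStream("screen", dimensions=2)
toolbar = Layer("toolbar", members=[GraphicalElement("menu"),
                                    GraphicalElement("note-length-button")])
sheet = Layer("canvas", members=[GraphicalElement("music-sheet")])
screen.layers += [toolbar, sheet]
sound = InteractionStream("sound-output", dimensions=0)
```

        <p>The sound stream carries no layers of visual GEs, which reflects the paper's point that streams separate spaces of interaction that cannot be represented together.</p>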
        <p>These GE are then specialized into UI-specific elements such as buttons,
check boxes, drop-down menus, and other UI widgets, to simplify the modeling
process. Fig. 3a shows the definition of such GE: we can have the typical menu
and button widgets positioned at the top, with a representation of a selected
button for illustration purposes, and the remaining space as the canvas.</p>
        <p>Additionally, all of these GE are extended with a set of cardinal anchors and
a central anchor. Alongside these automatic anchors, manual unmovable anchors
and guides can be placed in the model. These allow us to have topological
constraints in the form of Guards between any such anchors, including different
anchors of the same GE.</p>
        <p>[Fig. 3: (a) UI screen model example, with Edit and Help menus above the music-sheet canvas; (b) UI abstract input and sound output model examples, with a note input stream spanning C1 to B2.]</p>
        <p>Through these topological constraints, we can define
positional and scale restrictions on elements, minimum and maximum distances
between elements, minimum and maximum dimensions, and automatic
alignments. This is achieved by adjusting the movable anchors pointed to by the Guard
so that its constraint expression is satisfied. The metamodel presented is further
constrained by well-formedness rules not represented here, such as: the number
of Position elements of an Anchor must match the number of dimensions of that
particular Interaction Stream.</p>
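        <p>A minimal sketch of how one such topological Guard could be enforced, assuming a simple "move the movable anchor until the constraint expression holds" strategy; the Anchor and Guard names follow the metamodel extension, while the solving logic shown is purely illustrative and not the paper's algorithm.</p>

```python
# Hypothetical enforcement of a minimum-distance TopologicalGuard between
# two anchors. Each anchor holds one coordinate per stream dimension.

class Anchor:
    def __init__(self, position, movable=True):
        self.position = list(position)
        self.movable = movable

def min_distance_guard(fixed, movable, axis, min_dist):
    """Keep `movable` at least `min_dist` away from `fixed` along `axis`."""
    gap = movable.position[axis] - fixed.position[axis]
    if movable.movable and abs(gap) < min_dist:
        direction = 1 if gap >= 0 else -1
        # Adjust the movable anchor so the constraint expression is satisfied.
        movable.position[axis] = fixed.position[axis] + direction * min_dist

# Example: keep a button's left anchor 10 units right of the menu's right anchor.
menu_right = Anchor([50, 0], movable=False)  # manual, unmovable anchor
button_left = Anchor([55, 0])                # automatic cardinal anchor
min_distance_guard(menu_right, button_left, axis=0, min_dist=10)
print(button_left.position[0])  # 60: moved so the constraint holds
```

        <p>The unmovable anchor is never adjusted, matching the distinction the model makes between automatic cardinal anchors and manual, fixed ones.</p>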
        <p>For representation purposes, we also show the positioning of CS elements
from the Music Modeling DSL in the canvas. These CS elements align at the
top left anchor in the canvas and also conform to the Music Modeling DSL
CS and AS specifications. The two remaining elements on the canvas are the
representations of a cursor as a line with a marker over the CS, and a pointing
device as a large cross.</p>
        <p>List of Essential Actions This is a documentation model that contains a
listing of all Essential Actions the ME must perform. These are common actions
(e.g., copy, paste, save) and actions specific to a modeling language (e.g.,
instantiating elements in particular ways, performing checks at key points of the modeling
process). For our example, we only consider copy and paste actions, as well as
actions specific to music modeling: placing a note, playing a note, and selecting
a note type. This model is populated by a domain expert to define the actions
specific to a language and by a UX designer to define the common actions.
List of System Events This documentation model lists all System Events
provided by the target platform (e.g., the operating system). For example, for a
computer with a touch screen, mouse, and keyboard, events such as OnLeftClick,
OnTouch, and OnKeyPress can be expected. This list presents platform-specific
events that will trigger a particular behavior of the ME when received. The list is
populated by the software developer or architect responsible for the deployment
on a given implementation.</p>
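        <p>The two documentation models above can be sketched as plain enumerations; the action and event names are the ones mentioned in the text, and the enum encoding itself is an illustrative assumption.</p>

```python
from enum import Enum, auto

# Platform-independent Essential Actions for the music ME: common actions
# plus the music-specific ones from the example.
class EssentialAction(Enum):
    COPY = auto()
    PASTE = auto()
    SELECT_NOTE_TYPE = auto()
    PLACE_NOTE = auto()
    PLAY_NOTE = auto()

# Platform-specific System Events, as a touch-screen desktop might expose them.
class SystemEvent(Enum):
    ON_LEFT_CLICK = auto()
    ON_TOUCH = auto()
    ON_KEY_PRESS = auto()
```

        <p>Keeping the two lists separate mirrors the paper's division of labor: the first is filled in by the domain expert and UX designer, the second by the developer deploying on a concrete platform.</p>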
        <p>Behavior Model This model expresses the logic of actions expected to take
place while interacting with the ME. It defines how the Interface Model reacts
to Essential Actions received.</p>
        <p>
We propose to define behavior models in a hybrid formalism composing
Statecharts (SC) [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] with model transformation [
          <xref ref-type="bibr" rid="ref16">16</xref>
]. We use SC because it elegantly
describes systems that are reactive in nature, such as MEs. Furthermore, SC
refinement [
          <xref ref-type="bibr" rid="ref8">8</xref>
] gives the modeler the possibility to specify a generic behavior of the
ME model in the form of an SC with gaps to be specialized and filled in per DSL
and user profile. Model transformation allows us to define these specializations
in a rule-based, declarative way, using CS and Interface Model elements. This
makes it easier for UX and domain experts to describe this specialization.
Behavior Model Formalism We briefly describe the hybrid formalism of SC and
model transformation in an informal way. In our approach, we assume there is
an SC that models any ME generically and defines generic operations, such as
opening and closing the ME and other basic operations. Additionally, it
contains abstract states that act as placeholders to specify behaviors specific to the
domain of the ME. Specialization of this SC is done by embedding well-formed
SCs following the rules and conditions in [
          <xref ref-type="bibr" rid="ref8">8</xref>
]. At this point, we opt for a restrictive
approach to these specializations to guarantee the soundness of the UX.
        </p>
        <p>[Fig. 4: structure of a specialization model, with pre-condition, post-condition, and goal-condition states connected by external and internal transitions.]</p>
        <p>
Fig. 4 illustrates the structure of a specialization model: an SC to be
embedded in another, generic one for specialization purposes. There are three kinds of
states: pre-condition, post-condition, and goal-condition states. States contain
CS and Interface Model elements that specify the condition for a transition to be
triggered. Transitions can be external or internal. External transitions are
triggered by Essential Actions performed by the user and/or perform actions that are
perceived by the user. These transitions are to be mapped to platform-specific
System Events. Internal transitions are triggered by actions internal to the scope
of the system that do not result from a human interaction, such as the passage of
time, and transitions that are immediately triggered as soon as the source state
is reached. The semantics of the application of a transition t is as follows: if the
pre-condition of t is satisfied and the corresponding event is received, then the
system should update itself to satisfy the post- or goal condition. A pre-condition
state represents the left-hand side pattern of a model transformation rule that
shall be matched over the current state of the ME. A post-condition state
represents the right-hand side pattern of a model transformation rule that must be
satisfied after the event is received. Post-conditions also serve as pre-conditions
of subsequent transitions. Goal conditions are special kinds of post-conditions that
must be satisfied only when an internal event is received. Specialization
models must always start with an external transition. The conditional, hierarchical, and
parallelization constructs of SC, such as branching, forking/joining, and
OR- and AND-states, are all supported [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
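        <p>The transition semantics just described (pre-condition matched and event received, then update the state to satisfy the post-condition) can be sketched as follows; encoding patterns as predicates and update functions over a dictionary is an illustrative simplification of the left-hand/right-hand side patterns of the model transformation rules the formalism actually uses.</p>

```python
def step(me_state, event, transitions):
    """Apply the first enabled transition; return the (possibly new) ME state."""
    for t in transitions:
        # Enabled iff the event matches and the pre-condition pattern holds.
        if t["event"] == event and t["pre"](me_state):
            return t["post"](me_state)  # satisfy the post-condition
    return me_state  # no enabled transition: state unchanged

# Example: a note can only be placed once a note length has been selected.
transitions = [{
    "event": "PLACE_NOTE",
    "pre":  lambda s: s.get("selected_length") is not None,
    "post": lambda s: {**s, "notes": s["notes"] + [s["selected_length"]]},
}]

state = {"selected_length": "quarter", "notes": []}
state = step(state, "PLACE_NOTE", transitions)
print(state["notes"])  # ['quarter']
```

        <p>If the pre-condition fails (no length selected), the event is simply ignored and the state is unchanged, which is the restrictive behavior the approach opts for.</p>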
        <p></p>
        <p>This approach allows us to easily define concurrent interactions, through the
use of AND-states in the refinement of the generic behavior SC. In Fig. 5a and 5b,
we have two different interaction sequences to place a note on the music sheet.
In Fig. 5a, we have note placement with a pointing device: a UI-specific
button element that relates to note length is selected, and the corresponding
CS element is placed at the pointed location of the music sheet as an external
action. In Fig. 5b, we again have placement of the selected note length, but instead
it occurs on a note activation (note B1 is activated), and the corresponding CS
element is placed by the external action wherever the cursor is currently placed,
followed by an internal action that advances the cursor in a linear fashion. The
note placement of Fig. 5a can then be used with any pointing device, such as a
mouse or a touch screen. The note placement of Fig. 5b can be used with any
device that allows the direct referencing of a note, such as a musical keyboard.</p>
        <p>Fig. 5c shows the interaction of using a pointing device when a note is already
placed at that point of the music sheet. Instead of an interaction with the CS
by placing a new note as an external action, we play the sound of that note.
This means that the play interaction will be placed on all states that stem from
the placement interaction. Once the tone for that note has finished playing, an
internal action triggered by the system advances to an idle state.</p>
        <p>The Behavior Model is aimed at UX designers, who tailor the user experience
to the domain expert users of the ME.
In addition to all the model views of the interaction, we have a mapping between
the platform-independent Essential Actions (EA) and the platform-specific
System Events (SE). This mapping is achieved through the left-total function
deploy: EA → SE. This relation can be directly established if EAs are uniquely
defined, so that their context information (state of the ME) and UI elements are
accessible through the EA information.</p>
        <p>[Table 1: mapping between platform-specific System Events (SE) and platform-independent Essential Actions (EA) such as Select, Place Note on Pointer, Place Note on Cursor, and Play Note.]</p>
        <p>The mapping is specified in the form of a table, as shown in Table 1. It
is produced by the UX designer to establish the relation between
platform-independent external actions that interact with the user, defined in the list of
essential actions, and real-world actions translatable to the platform-specific
system events. For example, in Table 1, we map multiple SEs (OnTouch and
OnLeftClick) onto the same EA (Select) to provide alternative interactions and
to adapt them to a particular device. Because each action is tied to its own context,
the same SE (OnLeftClick) can be mapped to different EAs
(Select, Place Note on Pointer, Place Note on Cursor) without conflict.</p>
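        <p>A minimal sketch of this mapping as a context-indexed table: the SEs and EAs are those of Table 1, while the context names ("toolbar", "sheet") are assumptions introduced only to illustrate how the same SE resolves to different EAs without conflict.</p>

```python
# Each binding carries the ME context in which it applies, so one SE can map
# to different EAs, and several SEs can map to one EA (alternative devices).
MAPPING = {
    ("OnTouch",     "toolbar"): "Select",
    ("OnLeftClick", "toolbar"): "Select",
    ("OnLeftClick", "sheet"):   "Place Note on Pointer",
    ("OnKeyPress",  "sheet"):   "Place Note on Cursor",
}

def to_essential_action(system_event, context):
    """Resolve a platform-specific SE to a platform-independent EA."""
    return MAPPING.get((system_event, context))

# OnLeftClick triggers different EAs depending on where it happens:
print(to_essential_action("OnLeftClick", "toolbar"))  # Select
print(to_essential_action("OnLeftClick", "sheet"))    # Place Note on Pointer
```

        <p>Inverting this table per EA would recover the deploy: EA → SE function, left-total as long as every EA appears in at least one binding.</p>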
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>In this position paper, we motivated the need to address an issue that generated
MEs currently suffer from: the lack of adaptation of their behavior to improve the
UX of their domain expert users. Following a multi-paradigm modeling approach,
we proposed to explicitly model the UX of a DSL in order to generate an ME where
the domain user will feel completely immersed, with interactions and utilization of
the tool adapted to what they are accustomed to. The AS and CS of the DSL specify
the syntax of valid models. The Interface Model captures the representation of
the user interface. The CS of the DSL, the Interface Model, and the Essential Actions
are used to define the Behavior Model of the ME. System Events are mapped to
Essential Actions. All of these models are necessary to synthesize MEs where
the UX is adapted to the domain on a platform using specialized I/O peripherals.</p>
      <p>Our future work is to complete the full implementation of a prototype so
we can start validating and improving our solution with real-world experts from
different domains. We also plan to investigate different interaction streams that
go beyond visual screen rendering, such as audio, video animation, and tactile
haptic sensing supported by appropriate hardware devices. Furthermore, we plan
to formalize the different models and formalisms outlined here so as to provide
precise feedback to the users for inconsistencies, well-formedness violations, and other
design flaws.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>F.</given-names>
            <surname>Alonso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Fuertes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Gonzalez</surname>
          </string-name>
          , and
          <string-name>
            <surname>L.</surname>
          </string-name>
          <article-title>Martínez. User-Interface Modelling for Blind Users</article-title>
          .
          <source>In Computers Helping People with Special Needs</source>
          , volume
          <volume>5105</volume>
          <source>of LNCS</source>
          , pages
          <volume>789</volume>
          –
          <fpage>796</fpage>
          . Springer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>B.</given-names>
            <surname>Buxton</surname>
          </string-name>
          .
          <article-title>Sketching User Experiences</article-title>
          . Interactive Technologies. Morgan Kaufmann, Burlington,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>S.</given-names>
            <surname>Ceri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fraternali</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Bongio</surname>
          </string-name>
          .
          <article-title>Web Modeling Language (WebML): A modeling language for designing web sites</article-title>
          .
          <source>Comput. Netw.</source>
          ,
          <volume>33</volume>
          (
          <issue>1-6</issue>
          ):
          <volume>137</volume>
          –
          <fpage>157</fpage>
          ,
          <year>June 2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>A.</given-names>
            <surname>El Kouhen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gherbi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Dumoulin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Boulet</surname>
          </string-name>
          , and
          <string-name>
            <surname>S. Gerard.</surname>
          </string-name>
          <article-title>MID: A MetaCASE Tool for a Better Reuse of Visual Notations</article-title>
          .
          <source>In System Analysis and Modeling: Models and Reusability</source>
          , volume
          <volume>8769</volume>
          <source>of LNCS</source>
          , pages
          <volume>16</volume>
          –
          <fpage>31</fpage>
          . Springer,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>R.</given-names>
            <surname>France</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Rumpe</surname>
          </string-name>
          .
          <article-title>Model-driven Development of Complex Software: A Research Roadmap</article-title>
          .
          <source>In Future of Software Engineering</source>
          , pages
          <volume>37</volume>
          –
          <fpage>54</fpage>
          ,
          <string-name>
            <surname>Minneapolis</surname>
          </string-name>
          , May
          <year>2007</year>
          . IEEE Computer Society.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>M. B. Fraternali</surname>
          </string-name>
          .
          <article-title>Interaction Flow Modeling Language</article-title>
          . OMG,
          <year>February 2015</year>
          . Version 1.0.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>M.</given-names>
            <surname>Good</surname>
          </string-name>
          .
          <article-title>MusicXML for Notation and Analysis</article-title>
          .
          <source>In The Virtual Score: Representation</source>
          , Retrieval, Restoration, volume
          <volume>12</volume>
          of Computing in Musicology, pages
          <volume>113</volume>
          –
          <fpage>124</fpage>
          . MIT Press, Cambridge MA,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>C.</given-names>
            <surname>Hansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Syriani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Lucio</surname>
          </string-name>
          .
          <article-title>Towards Controlling Refinements of Statecharts</article-title>
          .
          <source>In Software Language Engineering Posters</source>
          , SLE '13, volume abs/1503.07266 of CoRR
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>D.</given-names>
            <surname>Harel</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Naamad</surname>
          </string-name>
          .
          <article-title>The STATEMATE semantics of statecharts</article-title>
          .
          <source>Transactions on Software Engineering and Methodology</source>
          ,
          <volume>5</volume>
          (
          <issue>4</issue>
          ):
          <fpage>293</fpage>
          –
          <lpage>333</lpage>
          , Oct.
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>S.</given-names>
            <surname>Kelly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lyytinen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Rossi</surname>
          </string-name>
          .
          <article-title>MetaEdit+: A fully configurable multi-user and multi-tool CASE and CAME environment</article-title>
          . In
          <string-name>
            <given-names>J.</given-names>
            <surname>Iivari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lyytinen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Rossi</surname>
          </string-name>
          , editors,
          <source>Conference on Advanced Information Systems Engineering</source>
          , volume
          <volume>1080</volume>
          <source>of LNCS</source>
          , pages
          <fpage>1</fpage>
          –
          <lpage>21</lpage>
          ,
          Crete, May
          <year>1996</year>
          . Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11. Object Management Group.
          <source>Diagram Definition</source>
          , Version 1.1,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Rouley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Orbeck</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Syriani</surname>
          </string-name>
          .
          <article-title>Usability and Suitability Survey of Features in Visual IDEs for Non-Programmers</article-title>
          . In Evaluation and Usability of Programming Languages and Tools,
          <source>PLATEAU '14</source>
          , pages
          <fpage>31</fpage>
          –
          <lpage>42</lpage>
          . ACM, Oct.
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>A.</given-names>
            <surname>Savidis</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Stephanidis</surname>
          </string-name>
          .
          <article-title>Developing Dual User Interfaces for Integrating Blind and Sighted Users: The HOMER UIMS</article-title>
          .
          <source>In CHI'95 Proceedings. ACM</source>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>M.</given-names>
            <surname>Shinn</surname>
          </string-name>
          .
          <source>Instant MuseScore</source>
          . Packt Publishing Ltd,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>D.</given-names>
            <surname>Steinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Budinsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Paternostro</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Merks</surname>
          </string-name>
          .
          <article-title>EMF: Eclipse Modeling Framework</article-title>
          .
          <source>Addison-Wesley Professional, 2nd edition</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>E.</given-names>
            <surname>Syriani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Vangheluwe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>LaShomb</surname>
          </string-name>
          .
          <article-title>T-Core: A Framework for Custom-built Transformation Languages</article-title>
          .
          <source>Journal on Software and Systems Modeling</source>
          ,
          <volume>14</volume>
          (
          <issue>3</issue>
          ):
          <fpage>1215</fpage>
          –
          <lpage>1243</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>