=Paper=
{{Paper
|id=Vol-214/paper-10
|storemode=property
|title=Metamodel-based definition of interaction with visual environments
|pdfUrl=https://ceur-ws.org/Vol-214/paper10.pdf
|volume=Vol-214
|dblpUrl=https://dblp.org/rec/conf/models/BottoniGL06
}}
==Metamodel-based definition of interaction with visual environments==
Paolo Bottoni, Dep. Computer Science, University "La Sapienza", Rome, Italy (bottoni@di.uniroma1.it)
Esther Guerra, Dep. Computer Science, Universidad Carlos III, Madrid, Spain (eguerra@inf.uc3m.es)
Juan de Lara, Dep. Computer Science, Universidad Autónoma, Madrid, Spain (jdelara@uam.es)
ABSTRACT
Metamodel approaches to building visual environments are becoming common in the field of domain specific visual languages, mainly focusing on the definition of visual editors and of simulation environments. Recent efforts tackle the generation of complex interaction management both in the editing and in the executing phases. We present an approach to interaction specification which takes into account metamodel information both on the objects that can be manipulated and on the spatial relations among them. Interaction dynamics are defined through a visual, declarative and formal notation based on graph grammars.

Categories and Subject Descriptors
Software [SOFTWARE ENGINEERING]: Design Tools and Techniques—User Interfaces

General Terms
Design, Human Factors, Theory

1. INTRODUCTION
Metamodelling frameworks for diagrammatic language definition and management are particularly exploited to construct environments for Domain Specific Visual Languages (DSVLs), as they allow a rapid implementation of visual environments based on some abstract notion of visual entities and of relations among them [3, 4]. These environments must support interaction to create visual sentences in the defined language and to manipulate the entities in them. The different interaction techniques should be constrained to allow only transformations complying with the metamodel. However, current approaches rely on standard forms of interaction, typically recurring to mechanisms provided by the implementation language, with little or no formal reference to the metamodel. This imposes rigidities on the possible interactions, or forces one to explicitly program the different alternatives for activating the same transformation.

We propose here the integration of a metamodel for interaction – I – with one for diagrammatic languages – D – thus decoupling the definition of low-level user-generated events from that of abstract high-level visual actions, to be served with reference to the constraints embodied in D. To this end, we rely on previous work separately developed by the authors, aimed at defining metamodels for diagrammatic language syntax and semantics [3] and at integrating some level of formality in the management of user interactions [5]. In particular, we exploit the notion of family of diagrammatic languages [2, 3] and put it to work in combination with event-driven grammars [5], which were originally not related to the management of spatial relations. This kind of graph grammar supports a stratified view of interaction events, where patterns of user-generated events can be mapped to more refined high-level ones, which can in turn produce cascading effects. This allows a complete configurability of the user interface, so that different styles of interaction can be used to produce the same effect, or the same user-generated event can produce different effects, according to the interface modality. Relations between low- and high-level events can be dynamically modified by substituting one grammar with another, without having to reprogram the event listeners.

Paper organization. After presenting related work on formal definition of user interaction in Section 2, we discuss the integration of the I and D metamodels in Section 3. Section 4 presents event-driven graph grammars and discusses their use to manage different levels of abstraction in event definition. Finally, Section 5 presents an application to the management of the containment relation between nested states in a simple variant of Statecharts, while Section 6 draws conclusions and points to directions for future work.

∗Work supported in part by the EC's Human Potential Programme under contract HPRN-CT-2002-00275 (SegraVis), and by the Spanish Ministry of Science and Education, projects MD2 (TIC2003-03654) and MOSAIC (TSI2005-08225-C07-06).

2. RELATED WORK
A formal model of interactive tasks and components must lie at the basis of every proposal for their integration and management. We do not consider here task-related formalisms, such as ConcurTaskTrees [6], and concentrate on models of components and abstract interaction.

Models of interaction control are either centralized, with some high-level machine driving legal interaction (e.g. [9]), or distributed, by associating with every interactive component its own control mechanism. A formal approach to the definition of centralized interaction control is proposed in [1], leading to the definition of verifiable finite state systems from a Visual Event Grammar. An important effort in the direction of distributed control is the proposal of Interactive Cooperative Objects (ICOs) [8], which encapsulate the state and behaviours, modelled through Petri nets, of Virtual Reality objects, with reference to the events that can have an effect on them and the way in which they react to these events (business logic processes and rendering algorithms). ICOs abstract from the specific devices through which events are produced, by mapping generated events to the services managing them.
The Abstract User Interaction (AUI) approach to graphical user interfaces [11] maps concrete user interactions, depending on different devices and style choices, to abstract ones. Realisations of concrete interaction techniques for an abstract interaction are provided on request, in a lazy functional style. The consequences of an interaction are modelled through calls to external functions. AUI is mainly aimed at device independence, but still attaches computations to low-level events. In a similar way, [10] proposes an abstract definition of interface components as compositions of platform independent widgets and views, to be mapped to their concrete realisation on a platform dependent model.

While we capitalize on the distinction between concrete and abstract events, we allow for a wider scope of supported interactions, as we rely on a metamodel including the base classes for the visual elements in a DSVL, for the spatial relationships between these elements, and for the GUI elements with which the end user can interact, as well as classes for the different kinds of events and actions. Having an explicit representation of actions resembles the concept of "action languages", which are becoming popular to express the semantics of metamodel-based languages [7]. However, our approach adds flexibility in that modelling the semantics of events by means of graph grammar rules allows the expression of complex conditions on their management as subgraphs in the left-hand side of a rule.

3. METAMODEL INTEGRATION
In a diagrammatic language, significant spatial relations exist among identifiable elements. The latter are recognizable entities in the language, to which a semantic role can be associated, each element being univocally materialized by means of a complex graphic element. Each such element is composed in turn of simpler graphic elements, each possessing one or more attach zones, which define its availability to participate in different spatial relations, such as containment or touching. The existence of a spatial relation with semantic relevance between two elements is assessed via the predicate isAttached(), implemented by each realisation of AttachZone. Figure 1 shows the metamodel D including these concepts. For space constraints, an abridged version is presented (for a complete presentation, see [2, 3]). In particular, we restrict our analysis to direct spatial relations, which are here always regarded as binary, without considering emergent relations, such as those derived from the closure of direct ones. A visual environment also contains GUI elements, e.g. buttons. Typically, an automatically generated environment would contain a button for creating each element of the DSVL alphabet Σ, as well as buttons for performing visual actions (e.g. selecting, moving or deleting).

Figure 1: Metamodel for Diagrammatic Languages.

In order to relate interaction concepts with those in D, we introduce a notion of interaction support. Low-level events generated on an interaction support are thus related to the corresponding visual element. Their effects can extend to other elements, for example by navigating spatial relations.

Figure 2 shows the metamodel I for interaction. Low-level events are received by some interaction support. Let Σ be the set of IdentifiableElement concrete subclasses. A typical set of low-level events for Σ is Low = {click<X, x, y, time>[mod](σ), drag<X, x, y, time>[mod](σ), drop<X, x, y, time>[mod](σ)}, where [mod] indicates some combination of key modifiers, and X, x, y and time indicate a mouse button, a screen position and a time, respectively. Moreover, general scope events can occur on the canvas on which visual entities are depicted, or on the GUI elements. The actual set of events depends on the characteristics of the underlying event support system.

Figure 2: Metamodel for Interaction.

At a higher level, a set Act of visual actions defines the types of significant interactive actions a user can perform, again related to the identifiable elements to be manipulated. Typically, Act is such that ∀σ ∈ Σ, Act ⊃ {create(σ), swapSelect(σ), delete(σ), move(σ), resize(σ), query(σ)}, having omitted action-specific parameters. Act can be recursively enriched by letting designers define new types of actions, based on events in Low and Act.

We introduce a new type of high-level events for spatial relations, such that a (direct) spatial relation can be brought to bear, or cease to exist, between two identifiable elements as an effect of such events, constituting a set RelAct. Let Θ be the set of subtypes of SpatialRelation. We have that ∀θ ∈ Θ, ∀σ1, σ2 ∈ Σ such that instances of θ relate pairs of type (σ1, σ2), RelAct ⊃ {install(θ, σ1, σ2), remove(θ, σ1, σ2)}. We adopt here event-driven graph grammars both to describe the mapping of user-generated events (in Low) to high-level events (in Act ∪ RelAct) and to specify their effects.
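The event and action sets just defined lend themselves to a direct encoding. The following Python sketch is ours, not part of the authors' formalism; all class and field names are hypothetical illustrations of Low, Act and RelAct as plain data types:

```python
from dataclasses import dataclass
from typing import FrozenSet

# Hypothetical encoding of the sets Low, Act and RelAct of Section 3.
# An element of Sigma (a concrete subclass of IdentifiableElement)
# is represented here simply by its type name.

@dataclass(frozen=True)
class LowLevelEvent:
    """An element of Low: click/drag/drop <X, x, y, time>[mod](sigma)."""
    kind: str                  # "click", "drag" or "drop"
    button: str                # X: the mouse button
    x: int                     # screen position
    y: int
    time: float
    modifiers: FrozenSet[str]  # [mod]: key modifiers, possibly empty
    target: str                # sigma: type of the identifiable element

@dataclass(frozen=True)
class VisualAction:
    """An element of Act, omitting action-specific parameters."""
    kind: str                  # e.g. "create", "move", ...
    target: str                # sigma

@dataclass(frozen=True)
class RelationAction:
    """An element of RelAct: install or remove a spatial relation theta."""
    kind: str                  # "install" or "remove"
    relation: str              # theta: a subtype of SpatialRelation
    source: str                # sigma1
    dest: str                  # sigma2

def default_act(sigma: str) -> set:
    """The six actions that Act is guaranteed to contain for every sigma."""
    return {VisualAction(kind, sigma)
            for kind in ("create", "swapSelect", "delete",
                         "move", "resize", "query")}
```

For instance, `default_act("State")` yields the six mandatory actions for a hypothetical `State` element, and `RelationAction("install", "Substate", "State", "State")` stands for install(Substate, State, State).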
4. EVENT-DRIVEN GRAPH GRAMMARS
Event-driven graph grammars were proposed in [5] as a means to handle user interaction in the editing phase. They make explicit the events that the visual elements can receive, and model the actions to be performed upon event generation via rules. An event-driven grammar is made of pre-rules and post-rules, to be applied before and after the event is actually executed, and complements the DSVL metamodel with interaction dynamics.

Event management occurs in five steps. First, the user generates an event on an interaction support, which is attached to the associated element. Then, pre-rules are executed as long as possible. They can modify model elements and produce and delete events. Hence, they can be thought of as pre-conditions (failing which the event is deleted and not executed) and pre-actions for a given event. In the third step, the existing event(s) are actually executed. At this point, zero or more events may be associated to various model elements. In the fourth step, the post-rules are executed as long as possible. Finally, the events are deleted.

Event-driven rules may contain instances of abstract classes in their left- and right-hand sides (LHS and RHS, respectively). Although no instance of an abstract class can be present in the model, "abstract objects" in rules can be matched to any concrete subclass instance. This feature makes rules more compact and reusable. For example, rules specifying the behaviour of Containers will be valid for any language containing entities that inherit from this class.

Figure 3 shows a pre-rule that transforms a low-level event into a visual, high-level action, modelling entity movement in a drag and drop interaction modality. We use a compact notation for the rules, in which LHS and RHS are presented together. The elements to be added by the rule application are shown in bold, and those to be removed in dashed lines. For rules with negative application conditions (NACs), the elements that should not be present are shown in a gray area. Finally, if an attribute of an entity is modified, its value appears as a tuple containing the values before and after the rule application.

The pre-rule in Figure 3 checks whether some identifiable element has received a Drop low-level event in its interaction support, while the "select" button in the user interface is selected. In this case, it generates a Move high-level visual action associated with the element. This rule uses "abstract objects" and is therefore valid for any DSVL environment.

Figure 3: Movement in Drag and Drop Modality.

Event-driven rules can model other interaction modalities (like point and click, or grasp and stretch) by rewriting patterns of low-level events into high-level actions. Figure 4 shows a set of rules modelling a point and click behaviour for moving entities. The user has already entered the MOVE modality by selecting the corresponding button, which provokes the deselection of any selected identifiable element in the canvas. The first rule selects the clicked element and deletes the click event. The second rule looks for a click event which is not associated to the interaction support area of any element (i.e. the click was on the canvas), and then generates a Move visual action associated with the selected element.

Figure 4: Movement in Point and Click Modality.

When generating an environment, the DSVL designer can choose between these interaction modalities. In addition, the high-level actions can be interpreted in different ways by different sets of rules, providing an additional degree of customization, as shown in the next Section.

5. AN EXAMPLE: MANAGING CONTAINMENT IN STATECHARTS
In this Section we show how to customize containment handling in an environment for a simplified version of Hierarchical State Machines, defined by the metamodel of Figure 5. The visual entities refine the relevant classes of the containment- and connection-based families of languages [3]. In particular, a State may act both as a container for other states, and as the source or target of a connection (the Transition). Class Substate is introduced as a specialization of Contains, whose instances relate pairs of instances of State.

Figure 5: Metamodel for Statecharts.

The DSVL designer can configure specific behaviours for handling the different spatial relations (containment, alignment, adjacency, etc.). We present an example where we model different behaviours that may occur when one element is moved outside a container. In the first one, the containee is disconnected from the container. In the second, the container is resized to accommodate the new position of the containee. In the third one, we forbid a containee to be moved outside the container. The main idea is that moving a containee outside a container produces the user-defined event "Take Out". Hence, we make available three rules interpreting the event in three different ways.
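The intended effect of the three interpretations can be sketched in Python. This is our illustration only: the paper expresses these behaviours as graph-grammar rules, and all names and structures below are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of the three interpretations of the user-defined
# "Take Out" event: detach the containee, resize the container, or
# forbid the movement altogether.

@dataclass
class Element:
    name: str
    x: int
    y: int
    w: int
    h: int
    container: Optional["Element"] = None

def detach_containee(containee: Element, old_x: int, old_y: int) -> bool:
    """Detach containee: remove the containment relation; Move proceeds."""
    containee.container = None
    return True

def resize_container(containee: Element, old_x: int, old_y: int) -> bool:
    """Resize container: grow it so that it still encloses the containee."""
    c = containee.container
    c.w = max(c.w, containee.x + containee.w - c.x)
    c.h = max(c.h, containee.y + containee.h - c.y)
    return True

def not_allowed(containee: Element, old_x: int, old_y: int) -> bool:
    """Not allowed: cancel the Move, restoring the previous position."""
    containee.x, containee.y = old_x, old_y
    return False

# Swapping the active handler changes the behaviour without touching
# the code that detects the take-out condition, mirroring the way one
# rule set can be substituted for another.
take_out_rules = {"detach": detach_containee,
                  "resize": resize_container,
                  "forbid": not_allowed}
```

A handler returning False stands for the deletion of the Move event, so the movement never takes place; returning True lets the Move action proceed.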
Figure 6 shows a pre-rule that generates the take-out user-defined event when an element is moved outside its container. As this is a pre-rule, the Move action has not been performed yet. This rule has a NAC that forbids applying the rule more than once in a row.

Figure 6: Generation of a User-Defined Action.

Figure 7 shows three different interpretations for the take-out event. The Detach containee rule removes the event and adds a Remove spatial action. The Resize container rule replaces the event by a resize attached to the container, with the appropriate coordinates for resizing. Finally, rule Not allowed deletes both the take-out and the Move event of the containee, thus preventing its movement.

Figure 7: Three Interpretations of "Take Out".

A movement of a state that concludes by placing it within a different container leads to the installation of a new spatial relation between the moved state and the destination one. In this case a new user-defined event, named "Place In", would be generated by a pre-rule analogous to that of Figure 6.

6. CONCLUSIONS AND FUTURE WORK
We have presented a novel approach to the definition of interaction modalities for DSVL environments based on a metamodel. Low-level events and high-level actions are taken into account, together with spatial relations and DSVL concepts. Different behaviours can be customized by means of rules. The definition of complex interaction patterns is supported by separating the processing of events from a management policy for all their envisaged consequences.

The advantages of such an approach are manifold. First, using graph grammars enables a formal approach to interaction definition which is also visual and declarative, thus favouring reuse of rules and reasoning on them. Moreover, as rules are defined at an abstract level, they are independent of the concrete user interface. This decoupling of device-originated events from low-level abstract events, and from the management of high-level actions, facilitates the definition of configurable and adaptive environments. Event-driven graph grammars provide a visual, formal semantics to an action language for DSVL environments. The integration of different GUI elements (such as buttons) into D, and their inclusion in the elements manageable through event-driven grammars, is a further step towards a complete formal definition of user interactions.

The approach is being integrated into the AToM3 architecture. Future work will study other interaction phenomena. For example, the issue of grouping elements and defining actions which simultaneously affect all of them can be tackled by viewing all the elements as contained in a temporary dummy container. Other challenges include global phenomena, such as context switches or complex layout redefinition.

7. REFERENCES
[1] J. Berstel, S. Crespi-Reghizzi, G. Roussel, and P. San Pietro. A scalable formal method for design and automatic checking of user interfaces. ACM Transactions on Software Engineering and Methodology, 14(2):124–167, 2005.
[2] P. Bottoni and G. Costagliola. On the definition of visual languages and their editors. In Diagrams, pages 305–319, 2002.
[3] P. Bottoni and A. Grau. A suite of metamodels as a basis for a classification of visual languages. In Proc. of 2004 IEEE VL/HCC, pages 83–90. IEEE CS, 2004.
[4] J. de Lara and H. Vangheluwe. AToM3: A tool for multi-formalism modelling and meta-modelling. In Proc. of FASE'2002, volume 2306 of LNCS, pages 174–188. Springer, 2002.
[5] E. Guerra and J. de Lara. Event-driven grammars: Towards the integration of meta-modelling and graph transformation. In Proc. of ICGT'2004, volume 3256 of LNCS, pages 54–69. Springer, 2004.
[6] G. Mori, F. Paternò, and C. Santoro. CTTE: Support for developing and analyzing task models for interactive system design. IEEE Trans. Software Eng., 28(8):797–813, 2002.
[7] P. Muller, P. Studer, F. Fondement, and J. Bezivin. Platform independent web application modeling and development with Netsilon. Software and System Modeling, 4(4):424–442, 2005.
[8] D. Navarre, P. A. Palanque, R. Bastide, A. Schyn, M. Winckler, L. P. Nedel, and C. M. D. S. Freitas. A formal description of multimodal interaction techniques for immersive virtual reality applications. In INTERACT, volume 3585 of LNCS, pages 170–183. Springer, 2005.
[9] G. D. Penna, B. Intrigila, and S. Orefice. An environment for the design and implementation of visual applications. J. Vis. Lang. Comput., 15(6):439–461, 2004.
[10] T. Schattkowsky and M. Lohmann. Towards employing UML model mappings for platform independent user interface design. In MDDAUI, volume 159 of CEUR Workshop Procs., 2005.
[11] K. A. Schneider and J. R. Cordy. Abstract user interfaces: A model and notation to support plasticity in interactive systems. In C. Johnson, editor, DSV-IS, volume 2220 of LNCS, pages 28–48. Springer, 2001.