A Model-Driven Approach to the Engineering of Multiple User Interfaces

Goetz Botterweck
University of Koblenz-Landau
Universitätsstr. 1
D-56070 Koblenz
+49 261 2872531
botterweck@uni-koblenz.de

ABSTRACT
In this paper we describe MANTRA (Model-based engineering of multiple interfaces with transformations), a model-driven approach to the development of multiple consistent user interfaces for one application. The common essence of these user interfaces is captured in an abstract UI model (AUI) which is annotated with constraints on the dialogue flow. We consider in particular how the user interface can be adapted on the AUI level by deriving and tailoring dialogue structures which take into account constraints imposed by front-end platforms or inexperienced users. With this input we use model transformations described in ATL (Atlas Transformation Language) to derive concrete, platform-specific UI models (CUI). These can be used to generate implementation code for several UI platforms including GUI applications, dynamic websites and mobile applications. The generated user interfaces are integrated with a multi-tier application by referencing WSDL-based interface descriptions and communicating with the application core over web service protocols.

1. INTRODUCTION
An elementary problem in user interface engineering is the complexity imposed by the diversity of platforms and devices which can be used as foundations. The complications increase when we develop multiple user interfaces (based on different platforms) which offer access to the same functionality. In that case we have to find a way to resolve the inherent contradiction between redundancy (the user interfaces of one application have something in common) and variance (each user interface should be optimized for its platform and context of use).

Model-driven approaches appear to be a promising solution to this research problem, since we can use models to capture the common features of all user interfaces and model transformations to produce multiple variations from them. The resulting implementations can be specialized (because we can embed platform-specific implementation knowledge into the transformations) as well as consistent (as they are all derived from the same common model and hence share the same logical structure).

2. RELATED WORK
The mapping problem [11], a fundamental challenge in model-based approaches, can occur in various forms and can be dealt with by various types of approaches [3]. One instance of this is the question of how we can identify concrete interaction elements that match a given abstract element and other constraints [13].

A similar challenge is the derivation of structures in a new model based on information given in another existing model. Many task-oriented approaches use requirements given by the task model to determine UI structures; for example, temporal constraints similar to the ones in our approach have been used to derive the structure of an AUI [9] or a dialogue model [6].

Florins et al. [5] take an interesting perspective on a similar problem by discussing rules for splitting existing presentations into smaller ones. That approach combines information from the AUI and the underlying task model, similar to our approach of using an AUI annotated with temporal constraints which are also derived from a task model.

Many model-driven approaches to UI engineering have proposed a hierarchical organization of interaction elements grouped together into logical units (e.g., [4]). A number of approaches to multiple user interfaces have been collected in [12].

3. ABSTRACT DESCRIPTION OF USER INTERFACES
The MANTRA model flow (cf. Figure 1) is structured vertically by abstraction levels similar to the CAMELEON framework [2]. The goal of our process (in Figure 1 going from top to bottom) is to create several user interfaces (front-ends) for the functionality provided by the core of the application.

Further steps are illustrated by a simple timetable application. Figure 2 shows the corresponding AUI model. The user can search for train and bus connections by specifying several search criteria like departure and destination locations, time of travel or the preferred means of transportation (lower part of Figure 2). The matching connections are retrieved by a web service operation and displayed in a separate presentation (upper right part of Figure 2).

At first, this model only contains UI elements and UI composites (see the notation symbols in Figure 3) organized in a simple aggregation hierarchy, plus the web service operation necessary to retrieve the results. This model is the starting point of our approach (cf. the result of step 1 in Figure 1) and captures the common essence of the multiple user interfaces of the application in one abstract UI. This AUI contains platform-independent interaction concepts like "Select one element from a list" or "Enter a date".

The AUI is then further annotated with dialogue flow constraints based on the temporal relationships of the ConcurTaskTree approach [10]. For instance, we can describe that two interaction elements have to be processed sequentially ( >> ) or have to be processed, but can be processed in any order ( |=| ).
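As an illustration of such an annotated AUI, the following is a minimal Java sketch; the type names (UIElement, DialogueConstraint, etc.) are purely hypothetical, since MANTRA's actual AUI metamodel is defined in UML/Ecore (cf. Section 6).

    // Illustrative sketch only: an AUI fragment with CTT-style temporal constraints.
    import java.util.List;

    enum TemporalOp {
        ENABLING,          // ">>"  : must be processed sequentially
        ORDER_INDEPENDENT  // "|=|" : both required, but in any order
    }

    record UIElement(String name, String interactionConcept) {}               // e.g. "Enter a date"
    record DialogueConstraint(UIElement left, UIElement right, TemporalOp op) {}
    record AbstractUI(List<UIElement> elements, List<DialogueConstraint> constraints) {}

    class TimetableAuiExample {
        static AbstractUI build() {
            UIElement criteria = new UIElement("Search Criteria", "Enter values");
            UIElement results  = new UIElement("Connections Found",
                                               "Select one element from a list");
            // The search criteria must be completed before the results are shown: criteria >> results
            return new AbstractUI(
                    List.of(criteria, results),
                    List.of(new DialogueConstraint(criteria, results, TemporalOp.ENABLING)));
        }
    }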
Figure 1. Model flow in the MANTRA approach.

4. ADAPTING ON THE AUI LEVEL
As a next step (step 2 in Figure 1) we augment the AUI by deriving dialogue and presentation structures. These structures are still platform-independent. However, they can be adapted and tailored to take into account constraints imposed, for instance, by platforms with limited display size or by inexperienced users.

4.1 Clustering Interaction Elements to Generate Presentation Units
First we cluster UI elements by identifying suitable UI composites. The subtrees starting at these nodes will become presentations in the user interface. For instance, we decided that "Time of Travel" and all UI elements below it will be presented coherently. This first automatic clustering is done by heuristics based on metrics like the number of UI elements in each presentation or the nesting level of grouping elements. To further optimize the results, the clustering can be refined by the human designer.
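To make such a heuristic concrete, here is a minimal Java sketch; the class and parameter names (UIComposite, maxElementsPerPresentation, maxNestingLevel) are illustrative assumptions, and MANTRA actually implements this step as an ATL transformation.

    // Illustrative sketch: mark composites as presentation roots when the subtree
    // they span is small enough, or when the maximum nesting level is reached.
    import java.util.ArrayList;
    import java.util.List;

    class UIComposite {
        String name;
        List<UIComposite> children = new ArrayList<>();
        int leafElementCount;        // number of atomic UI elements in this subtree (precomputed)
        boolean presentationRoot;    // set by the clustering heuristic
    }

    class ClusteringHeuristic {
        int maxElementsPerPresentation = 6;  // metric 1: how much fits into one presentation
        int maxNestingLevel = 2;             // metric 2: how deep grouping may nest before we split

        void cluster(UIComposite node, int level) {
            if (node.leafElementCount <= maxElementsPerPresentation || level >= maxNestingLevel) {
                // e.g. "Time of Travel" and everything below it becomes one presentation;
                // the human designer may still refine this decision afterwards.
                node.presentationRoot = true;
                return;
            }
            for (UIComposite child : node.children) {
                cluster(child, level + 1);
            }
        }
    }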

4.2 Inserting Control-Oriented Interaction Elements
Secondly, we generate the navigation elements necessary to traverse between the presentations identified in the preceding step. For this we create triggers. These are abstract interaction elements which can start an operation (OperationTrigger) or the transition to a different presentation (NavigationTrigger). In graphical interfaces these can be represented as buttons; in other front-ends they could also be implemented as speech commands.

To generate NavigationTriggers in a presentation p we calculate dialogueSuccessors(p), which is the set of all presentations which can "come next" if we observe the temporal constraints. We can then create NavigationTriggers (and related Transitions) so that the user can reach all presentations in dialogueSuccessors(p). In addition to this we have to generate OperationTriggers for all presentations which will trigger a web service operation, e.g., "Search" to retrieve matching train connections (lower right corner of Figure 2).
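The following Java sketch shows one possible reading of dialogueSuccessors(p); the types and the exact constraint semantics are assumptions for illustration, not the actual MANTRA implementation (which is realized as an ATL transformation).

    // Illustrative sketch: presentations reachable in one step from p, observing
    // enabling (>>) and order-independence (|=|) constraints between presentations.
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Set;

    enum ConstraintOp { ENABLING /* >> */, ORDER_INDEPENDENT /* |=| */ }

    record Presentation(String name) {}
    record DialogueEdge(Presentation from, Presentation to, ConstraintOp op) {}

    class DialogueStructureBuilder {
        final List<DialogueEdge> edges;

        DialogueStructureBuilder(List<DialogueEdge> edges) { this.edges = edges; }

        Set<Presentation> dialogueSuccessors(Presentation p) {
            Set<Presentation> successors = new LinkedHashSet<>();
            for (DialogueEdge e : edges) {
                if (e.op() == ConstraintOp.ENABLING && e.from().equals(p)) {
                    successors.add(e.to());                                   // p >> q: q can come next
                } else if (e.op() == ConstraintOp.ORDER_INDEPENDENT
                        && (e.from().equals(p) || e.to().equals(p))) {
                    successors.add(e.from().equals(p) ? e.to() : e.from());   // p |=| q: either order
                }
            }
            // One NavigationTrigger (plus Transition) would be generated per successor.
            return successors;
        }
    }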




Figure 2. Adapted AUI model of the sample application, already annotated with presentations and triggers.
                       Figure 3. Simplified excerpt from the AUI metamodel and the related notation symbols.

These two adaptation steps (identification of presentations, insertion of triggers) are implemented as ATL model transformations. These result in the AUI (blue symbols in Figure 2) augmented with dialogue structures (orange symbols) which determine the paths a user can take through our application.

It is important to note that the dialogue structures are not fully determined by the AUI. Instead, we can adapt the AUI according to the requirements and create different variants of it (cf. the results of step 2 in Figure 1). For instance, we could get more (but smaller) presentations to facilitate viewing on a mobile device, or we could decide to have large coherent presentations, taking the risk that the user has to do lots of scrolling if restricted to a small screen.
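As a small, purely hypothetical illustration of how such variants could be parameterized, different platform profiles might simply feed different limits into the clustering step:

    // Hypothetical platform profiles steering the adaptation: the same AUI is
    // clustered with different limits, yielding different dialogue variants.
    record PlatformProfile(String name, int maxElementsPerPresentation) {}

    class ExampleProfiles {
        // Small screen: split into many small presentations (more navigation steps).
        static final PlatformProfile MOBILE  = new PlatformProfile("mobile", 3);
        // Desktop GUI: few large, coherent presentations (scrolling risk on small screens).
        static final PlatformProfile DESKTOP = new PlatformProfile("desktop", 12);
    }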
4.3 Selecting Content
As an additional adaptation step we can filter content retrieved from the web service based on priorities. For instance, if a user has a choice, higher priority is given to knowing when the train is leaving and where it is going before discovering whether it has a restaurant. This optional information can be factored out to separate "more details" presentations.

A similar concept is that of substitution rules, which provide alternative representations for recurring content. A train, for example, might be designated as InterCityExpress, ICE, or by a graphical symbol based on the train category, depending on how much display space is available. These priorities and substitution rules are domain knowledge which cannot be inferred from other models. The necessary information can therefore be stored as annotations to the underlying data model.
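A minimal Java sketch of both ideas follows; the names (ResultAttribute, the priority threshold, the concrete labels) are illustrative assumptions rather than part of MANTRA.

    // Illustrative sketch: priority-based content selection and a simple substitution rule.
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    record ResultAttribute(String name, int priority) {}   // higher priority = more important

    class ContentSelector {
        // Partition result attributes: true -> main presentation, false -> "more details".
        static Map<Boolean, List<ResultAttribute>> select(List<ResultAttribute> attributes,
                                                          int priorityThreshold) {
            return attributes.stream()
                    .collect(Collectors.partitioningBy(a -> a.priority() >= priorityThreshold));
        }

        // Substitution rule: pick the representation of the train category that fits
        // the available display space (graphical symbol < "ICE" < "InterCityExpress").
        static String trainCategoryLabel(int availableChars) {
            if (availableChars >= 16) return "InterCityExpress";
            if (availableChars >= 3)  return "ICE";
            return "\u25B6";   // stand-in for a graphical symbol on very small displays
        }
    }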
5. GENERATING CONCRETE AND IMPLEMENTED USER INTERFACES
Subsequently we transform the adapted AUI models into several CUIs, using a specialized model transformation (step 3 in Figure 1) for each target platform. These transformations encapsulate the knowledge of how the abstract interaction elements are best transformed into platform-specific concepts. Hence, they can be reused for other applications over and over again.

As a result we get platform-specific CUI models. These artefacts are still represented and handled as models, but use platform-specific concepts like "HTML-Submit-Button" or ".NET GroupBox". This makes it easier to use them as a basis for the code generation (step 4 in Figure 1) which produces the implementations of the desired user interfaces in platform-typical programming or markup languages.
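To give an impression of what such a rule expresses, here is a plain-Java rendering of one AUI-to-CUI mapping; MANTRA describes these rules in ATL, and the AUI/CUI types shown here are hypothetical.

    // Illustrative sketch: on the web platform an abstract OperationTrigger is mapped
    // to an HTML submit button; the GUI and mobile transformations map it differently.
    class OperationTrigger { String label; String operationName; }
    class HtmlSubmitButton { String caption; String formAction; }

    class Aui2WebMapping {
        HtmlSubmitButton transform(OperationTrigger trigger, String serviceEndpoint) {
            HtmlSubmitButton button = new HtmlSubmitButton();
            button.caption = trigger.label;                                    // e.g. "Search"
            button.formAction = serviceEndpoint + "/" + trigger.operationName; // operation to invoke
            return button;
        }
    }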
6. APPLIED TECHNOLOGIES
We described the metamodels used in MANTRA (including platform-specific concepts) in UML and then converted these to Ecore, since we use the Eclipse Modeling Framework (EMF) [1] to handle models and metamodels.

The various model transformations (e.g., for steps 2 and 3) are described in ATL [8]. On the one hand, the integration of ATL with Eclipse and EMF was helpful, as it supported the development in an integrated environment which was well known to us. On the other hand, the work with ATL model transformations turned out to be time-consuming; for instance, ATL was sensitive even to small mistakes and then often did not provide helpful error messages.

We use a combination of Java Emitter Templates and XSLT to generate (step 4) arbitrary text-oriented or XML-based implementation languages (e.g., C# or XHTML with embedded PHP).

The coordination of several steps in the model flow is automated by mechanisms provided by the Eclipse IDE and related tools; e.g., we use the build tool Apache Ant [7] (which is integrated in Eclipse) and custom-developed "Ant Tasks" to manage the chain of transformations and code generation.
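For concreteness, a minimal EMF snippet of the kind used to load such a model for further processing is shown below; the file name is a placeholder, and a real setup would also register the corresponding metamodel package.

    // Minimal EMF usage sketch: load an XMI-serialized model so that transformations
    // or generators can process it. (Standalone setup; file name is a placeholder.)
    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.resource.Resource;
    import org.eclipse.emf.ecore.resource.ResourceSet;
    import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
    import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

    class LoadModel {
        static EObject loadRoot(String path) {
            ResourceSet resourceSet = new ResourceSetImpl();
            resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                       .put("xmi", new XMIResourceFactoryImpl());
            // In a real setup the (generated) metamodel package would be registered here.
            Resource resource = resourceSet.getResource(URI.createFileURI(path), true);
            return resource.getContents().get(0);   // root element of the loaded model
        }
    }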

We use web services as an interface between the UIs and the application core. Hence, the UI models reference a WSDL-based description of the operations in the application core. The generated UIs then use web service operations, e.g., to retrieve results for a query specified by the user.
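As an illustration of this integration, a generated front-end might call the application core through a client stub derived from the WSDL description; the service and data type names below (TimetableService, ConnectionQuery, Connection) are invented for this sketch.

    // Illustrative sketch: a generated UI handler delegating a "Search" trigger to a
    // web service operation of the application core (hypothetical stub interface).
    import java.util.List;

    interface TimetableService {                        // stands in for the WSDL port type
        List<Connection> findConnections(ConnectionQuery query);
    }

    record ConnectionQuery(String from, String to, String departureTime) {}
    record Connection(String train, String departure, String arrival) {}

    class SearchTriggerHandler {
        private final TimetableService service;

        SearchTriggerHandler(TimetableService service) { this.service = service; }

        // Called when the user activates the "Search" OperationTrigger: the query goes
        // to the application core and the matching connections fill the results presentation.
        List<Connection> onSearch(ConnectionQuery query) {
            return service.findConnections(query);
        }
    }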
Figure 4. The generated front-ends (Web, GUI, mobile).

7. CONCLUSION
We have shown how our MANTRA approach can be used to generate several consistent user interfaces for a multi-tier application (cf. Figure 4).

At the moment, the automated model flow (cf. Figure 1) starts at the AUI level. But nothing prevents us from starting with a task model (e.g., in CTT) and then either manually transferring the task structures into an AUI model, or extending the automated model flow to support task models from which the annotated AUI model can be derived.

We discussed how the user interface can be adapted on the AUI level by tailoring dialogue and logical presentation structures which take into account requirements imposed by front-end platforms or inexperienced users. For this we used the hierarchical structure of interaction elements and constraints on the dialogue flow which can be derived from a task model.

The approach generates fully working prototypes of user interfaces on three target platforms (GUI, dynamic website, mobile device) which can serve as front-ends to arbitrary web services.

8. ACKNOWLEDGEMENTS
We would like to thank the anonymous reviewers for their constructive and valuable feedback.

9. REFERENCES
[1] Budinsky, F., Steinberg, D., Merks, E., Ellersick, R. and Grose, T.J. Eclipse Modeling Framework: A Developer's Guide. Addison-Wesley, Boston, MA, USA, 2003.
[2] Calvary, G., Coutaz, J., Thevenin, D., Limbourg, Q., Bouillon, L. and Vanderdonckt, J. A unifying reference framework for multi-target user interfaces. Interacting with Computers, 15 (3), 2003, 289-308.
[3] Clerckx, T., Luyten, K. and Coninx, K. The mapping problem back and forth: customizing dynamic models while preserving consistency. In TAMODIA '04 (Prague, Czech Republic, 2004), ACM Press, 33-42.
[4] Eisenstein, J., Vanderdonckt, J. and Puerta, A.R. Applying model-based techniques to the development of UIs for mobile computers. In IUI '01, 2001, 69-76.
[5] Florins, M., Simarro, F.M., Vanderdonckt, J. and Michotte, B. Splitting rules for graceful degradation of user interfaces. In Intelligent User Interfaces 2006, 2006, 264-266.
[6] Forbrig, P., Dittmar, A., Reichart, D. and Sinnig, D. From Models to Interactive Systems -- Tool Support and XIML. In IUI/CADUI 2004 Workshop "Making model-based user interface design practical: usable and open methods and tools" (Island of Madeira, Portugal, 2004).
[7] Holzner, S. and Tilly, J. Ant: The Definitive Guide. O'Reilly, Sebastopol, CA, USA, 2005.
[8] Jouault, F. and Kurtev, I. Transforming models with ATL. In Model Transformations in Practice (Workshop at MoDELS 2005), Montego Bay, Jamaica, 2005.
[9] Paternò, F. One model, many interfaces. In CADUI '02 (Valenciennes, France, 2002).
[10] Paternò, F., Mancini, C. and Meniconi, S. ConcurTaskTrees: A diagrammatic notation for specifying task models. In Interact '97 (Sydney, 1997), Chapman and Hall, 362-369.
[11] Puerta, A.R. and Eisenstein, J. Interactively mapping task models to interfaces in MOBI-D. In DSV-IS 1998 (Abingdon, UK, 1998), 261-273.
[12] Seffah, A. and Javahery, H. Multiple User Interfaces: Cross-Platform Applications and Context-Aware Interfaces. J. Wiley, Hoboken, NJ, 2004.
[13] Vanderdonckt, J. Advice-Giving Systems for Selecting Interaction Objects. In User Interfaces to Data Intensive Systems - UIDIS '99 (Edinburgh, Scotland, 1999), 152-157.