=Paper=
{{Paper
|id=Vol-214/paper-2
|storemode=property
|title=Open Issues for the development of 3D Multimodal Applications from an MDE perspective
|pdfUrl=https://ceur-ws.org/Vol-214/paper2.pdf
|volume=Vol-214
|dblpUrl=https://dblp.org/rec/conf/models/BoeckGCV06
}}
==Open Issues for the development of 3D Multimodal Applications from an MDE perspective==
Open Issues for the development of 3D Multimodal User Interfaces from an MDE perspective

Joan De Boeck¹, Juan Manuel Gonzalez Calleros², Karin Coninx¹, Jean Vanderdonckt²

¹ Hasselt University, Expertise centre for Digital Media (EDM) and transnationale Universiteit Limburg, Wetenschapspark 2, B-3590 Diepenbeek, Belgium, {joan.deboeck, karin.coninx}@uhasselt.be

² Université catholique de Louvain, School of Management (IAG), Belgian Lab. of Computer-Human Interaction (BCHI), Place des Doyens 1, B-1348 Louvain-la-Neuve, Belgium, {gonzalez, vanderdonckt}@isys.ucl.ac.be
ABSTRACT

Given its current state of the art, Model-Based UI Development (MBDUI) is able to fulfill the major requirements of desktop and mobile applications, such as form-based user interfaces that adapt to the actual context of use. More recent research deals with the development of 3D interactive multimodal environments. Though user-centered design increasingly drives the design of these environments, less attention is devoted to the development processes than to interactive tools supporting isolated phases in the realization process. In this paper we describe our findings when considering model-based development of 3D multimodal applications in the context of model-driven engineering. We concentrate on the requirements of such a process, the models being used, and the transformations that are able to guide or even automate part of the development process for the envisioned applications. We conclude with some open issues that have been discovered.

1. INTRODUCTION

Model-based development of user interfaces (MBDUI) is finding its way from academic research to practical applications in industrial projects. While the principles of MBDUI have largely been investigated for traditional form-based desktop UIs [9,10,14], the need for flexible development of contemporary interactive applications has raised the attention for a model-based approach. Mobile applications [10], (context-sensitive) multi-device user interfaces [2,3,10], and distributed and migratable user interfaces [3,14] are already emerging, and will gain importance with the realization of pervasive computing.

Multimodality, including speech input, voice output and pen-based interaction, is a central topic in many research projects. However, most of the contemporary research activities in the area of model-based UI development concentrate on 2D applications, in which interaction is done in two dimensions with traditional or pen-based input, even when working with 3D scenes or data.

In order to interact with these 3D objects in a multimodal way, several methods have been introduced [1,4,7,11,15,16], but none of them is truly based on genuine models for the whole development life cycle. Most focus directly on programming issues rather than on the design and analysis of the final application. This is sometimes reinforced by the fact that the available tools for 3D UI design are toolkits, interface builders, or rendering engines.

Based on our former experience with the realization of interactive virtual environments (IVEs) on the one hand, and with model-based development of multi-device applications on the other hand, our current research activities concern bridging the gap between both techniques.

In order to solve the shortcomings of current model-based design approaches for IVEs, we investigate the possibilities of a tool-supported development process for virtual environment applications. To specify this development process we first gather some requirements, based on existing tools and processes. Afterwards we elaborate on two model-based approaches and compare them with respect to the identified requirements. We investigate which requirements are fulfilled and what the problems are in both processes. Finally, we present some open issues that were discovered during their implementation and evaluation.

2. REQUIREMENTS

We expect model-based development of interactive 3D environments to be successful when it is conceptualized as a combination of two different development approaches, namely MBDUI and the toolkit-based development of IVEs. Both approaches have been examined for their benefits in order to gather the requirements necessary to define our process.

An overview of model-based processes (e.g., [2,3,9,10]) shows that they have several common properties. Nearly all processes start with some kind of task model and evolve towards the final user interface using an incremental approach. During each increment, a model is converted into the next by means of an automatic transformation (through mapping rules) or manual adaptation by the user. Although these model-based approaches have shown their value in dialog and web-based interface generation, none of them seems directly usable to design IVEs, since they all lack the possibility to describe direct manipulation techniques and metaphors. A good MBDUI approach should therefore consider both the UI widgets and the description of possible interaction techniques.

In contrast with the MBDUI tools that have been studied, tools and toolkits to develop interactive virtual environments (e.g., [4,16]) do not immediately show common characteristics. This is mainly due to the wide variety of existing toolkits, each focusing on a specific part of the final application, such as object design, scene composition, and widget design. Combining these different toolkits is not easy, since the output of one tool cannot, in most cases, be used as input for another tool. Therefore it is important that several design issues can be integrated within the same process, containing code generation algorithms that can be supported by existing toolkits. An important issue for these code generation algorithms is that manual changes should be preserved after regeneration (see the sketch below). Finally, graphical tool support should be offered in order to design the high-level description models, check their correctness, and generate the final application.
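To make the regeneration requirement concrete, one common technique is to mark protected regions in the generated code and splice the user's edits back in after every regeneration. The following Python sketch illustrates this idea under our own, purely hypothetical marker syntax; it is not the mechanism of any tool discussed in this paper.

```python
import re

# Markers delimiting a protected region in generated code. The marker
# syntax and all names here are illustrative, not taken from any of
# the tools discussed in this paper.
BEGIN = "// BEGIN USER CODE:"
END = "// END USER CODE"
REGION = re.compile(re.escape(BEGIN) + r"(\w+)\n(.*?)" + re.escape(END), re.DOTALL)

def harvest_user_code(old_source: str) -> dict:
    """Collect the manually written body of every protected region."""
    return {name: body for name, body in REGION.findall(old_source)}

def regenerate(template: str, old_source: str) -> str:
    """Re-emit generated code from a fresh template, splicing the
    previously edited regions back in so manual changes survive."""
    user_code = harvest_user_code(old_source)

    def splice(match):
        name = match.group(1)
        return BEGIN + name + "\n" + user_code.get(name, "\n") + END

    return REGION.sub(splice, template)
```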
3. PRACTICAL IMPLEMENTATIONS

In this section we describe two model-based approaches for the design of IVEs, called CoGenIVE and Veggie, by means of the PIM-PSM pattern explained in [12] and depicted in Figure 1. This pattern starts with a Computing Independent Model (CIM), which is aimed at capturing the general requirements of the future system independently of any implementation. From this model, a Platform Independent Model (PIM) is derived once a technological space has been selected. This model is in turn converted into a Platform Specific Model (PSM) by means of certain transformation rules once a particular target computing platform has been decided. This MDA pattern can be applied multiple times at these three levels, using the resulting PSM of the first pass as the input PIM for the second pass. Usually, the initial CIM remains constant over time unless new requirements are introduced. In the remainder of this section, CoGenIVE and Veggie are compared to the requirements that were defined in Section 2.

Figure 1: PIM-PSM pattern

Figure 2: CoGenIVE (a) and Veggie (b) approaches
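The repeated application of the pattern can be pictured as a chain of model-to-model transformations in which the PSM produced by one pass becomes the input PIM of the next. The Python sketch below is schematic; the model representation and the transformation names are invented for illustration.

```python
from typing import Callable

# A model is reduced to a dictionary of properties; a transformation
# maps one model to the next. All names are invented for illustration.
Model = dict
Transformation = Callable[[Model], Model]

def apply_mda_chain(pim: Model, passes: list) -> Model:
    """Apply the PIM-PSM pattern repeatedly: the PSM produced by one
    pass serves as the input PIM of the next, while the original CIM
    (not shown) stays constant unless requirements change."""
    model = dict(pim)
    for to_psm in passes:
        model = to_psm(model)  # PIM -> PSM for this pass
    return model

# Hypothetical two-pass derivation for an IVE:
generic = lambda m: {**m, "widgets": "abstract 3D widgets"}     # pass 1
toolkit = lambda m: {**m, "renderer": "a concrete VR toolkit"}  # pass 2
psm = apply_mda_chain({"task": "select object"}, [generic, toolkit])
```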
3.1 The CoGenIVE approach

3.1.1 Process description

CoGenIVE (Code Generation for Interactive Virtual Environments) is a tool-supported process developed at the Expertise centre for Digital Media (EDM), a research lab at Hasselt University. The tool has been created in order to support and evaluate a model-based development process (depicted in Fig. 2a) that facilitates the creation of multimodal IVEs. See [6] for more details.

The first explicit artifact in the development process is a Task Model, expressed in ConcurTaskTrees (CTT) [10]. This widely used notation has a graphical syntax and offers both a hierarchical structure and support for specifying temporal relations between tasks. Four types of tasks are supported in the CTT notation: application tasks, user tasks, interaction tasks, and abstract tasks. Sibling tasks on the same level in the hierarchy of decomposition can be connected by temporal operators.

Once the Task Model has been created, it is converted into a Dialog Model, denoted as a State Transition Network (STN). The STN is based upon Enabled Task Sets (ETS), which can be automatically generated from the CTT by means of the algorithm described by Luyten et al. [9]. Each ETS consists of the tasks that can be executed in a specific application state. Since we are developing Interactive Virtual Environments, we only elaborate on the interaction tasks within the application. A possible example of such a task is object selection. In a typical form-based desktop application, selecting an item is a straightforward task in which the user selects an entry from a list. In an IVE, however, this task becomes more complex because of the wide variety of selection metaphors (e.g. touch selection, ray selection, go-go selection).
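As an illustration of how a dialog model built on Enabled Task Sets behaves, the sketch below represents each application state by the set of tasks enabled in it and reduces temporal operators to plain state transitions. This is a deliberately simplified toy, not the activity chain extraction algorithm of Luyten et al. [9]; all state and task names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class EnabledTaskSet:
    """One dialog state: the set of tasks executable in that state."""
    name: str
    tasks: frozenset

@dataclass
class StateTransitionNetwork:
    """A dialog model as a state transition network over ETSs."""
    states: dict = field(default_factory=dict)
    transitions: dict = field(default_factory=dict)  # (state, task) -> state
    current: str = ""

    def add_state(self, ets: EnabledTaskSet) -> None:
        self.states[ets.name] = ets
        self.current = self.current or ets.name  # first state is initial

    def perform(self, task: str) -> None:
        """Execute an enabled task and follow its transition, if any."""
        if task not in self.states[self.current].tasks:
            raise ValueError(f"'{task}' is not enabled in state {self.current}")
        self.current = self.transitions.get((self.current, task), self.current)

# Hypothetical IVE example: selecting an object enables manipulation.
stn = StateTransitionNetwork()
stn.add_state(EnabledTaskSet("Explore", frozenset({"navigate", "select object"})))
stn.add_state(EnabledTaskSet("Manipulate", frozenset({"move object", "deselect"})))
stn.transitions[("Explore", "select object")] = "Manipulate"
stn.transitions[("Manipulate", "deselect")] = "Explore"
stn.perform("select object")  # the dialog model is now in state "Manipulate"
```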
To handle this problem, an Interaction Description Model, called NiMMiT, has been created. The goal of NiMMiT is to describe an interaction task in more detail. The diagrams can be created by means of the CoGenIVE tool and are exported to an XML file which is loaded at runtime. The connection of the diagrams to the interaction tasks in the dialog model is currently done by hand. A more detailed description of NiMMiT, together with some examples, can be found in [13].

Within CoGenIVE the user can create user interface elements such as dialogs, menus, and toolbars, which are then expressed in a VRIXML presentation model. VRIXML is an XML-based user interface description language (UIDL), so the resulting resources are likewise loaded into the application at runtime. VRIXML examples and a motivation for the creation of this UIDL can be found in [5]. Like the interaction descriptions, the user interface elements have to be connected to the different application states manually.

Once all models have been created and connected, they are used to automatically generate a prototype of the IVE, together with the external resource files in which the NiMMiT and VRIXML descriptions are stored. This approach offers some extra flexibility, since the interaction techniques and the interface widgets can be altered without regenerating the code of the virtual environment, as the sketch below illustrates.
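Because the NiMMiT and VRIXML descriptions live in external XML files, the generated prototype can reload them at startup without recompilation. The sketch below shows what such runtime loading might look like; the element names and the binding step are hypothetical and do not reflect actual NiMMiT or VRIXML markup.

```python
import xml.etree.ElementTree as ET

def load_interaction_techniques(path: str) -> dict:
    """Parse an XML file of interaction descriptions and index them by
    name. The <technique name="..."> element is a placeholder, not
    real NiMMiT markup."""
    root = ET.parse(path).getroot()
    return {t.get("name"): t for t in root.iter("technique")}

def bind_to_dialog_state(stn_state, technique) -> None:
    """Attach a loaded interaction description to an application state.
    In CoGenIVE this connection is currently made by hand; here it is
    reduced to storing a reference."""
    stn_state.interaction = technique

# Hypothetical usage, assuming a file "interaction.xml" exists:
# techniques = load_interaction_techniques("interaction.xml")
# bind_to_dialog_state(current_state, techniques["ray_selection"])
```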
3.1.2 Process evaluation

CoGenIVE covers several of the requirements identified in Section 2. The process starts from a task model and incrementally evolves towards the final user interface. The first increment (towards the dialog model) can be done by an automatic transformation. Afterwards the designer has to manually connect the presentation and the interaction description model. Preservation of manual changes is ensured only in the second transformation, resulting in possible inconsistencies between models that are manually adapted. However, once the code is generated from the designed models, manual adaptations are tracked and saved. This way, when regenerating the application prototype, the manually inserted code is preserved.

Preliminary evaluation of the CoGenIVE process in some IVE realizations has shown a considerable reduction of development time. Currently we are working on a new version of CoGenIVE with improved and more integrated tool support.

3.2 The Veggie approach

3.2.1 Process description

Veggie (Virtual reality Evaluator providing Guidance based on Guidelines for Interacting with End-users) is a project developed at the Belgian Lab of Computer-Human Interaction (BCHI), a research lab at the Université catholique de Louvain. A transformational method for developing 3D user interfaces of interactive information systems was presented in [8] (Figure 2b).

The method relies on the Cameleon reference framework [2], which structures the development life cycle of multi-target UIs according to four layers: (i) the Final UI (FUI) is the operational UI, i.e. any UI running on a particular computing platform either by interpretation (e.g., through a Web browser) or by execution (e.g., after the compilation of code in an interactive development environment); (ii) the Concrete UI (CUI) expresses any FUI independently of any term related to a particular rendering engine, that is, independently of any markup or programming language; (iii) the Abstract UI (AUI) expresses any CUI independently of any interaction modality (e.g., graphical, vocal, tactile); and (iv) the Task & Concept level describes the various interactive tasks to be carried out by the end user and the domain objects that are manipulated by these tasks. Models are uniformly expressed in the same UIDL, which is selected to be UsiXML (User Interface eXtensible Markup Language, www.usixml.org) [14]. Any other UIDL could be used equally, provided that the concepts used are also supported. For instance, other UIDLs in this area are VRIXML [5], SSIML/AR [15], and DAC [1].
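Read from (iv) down to (i), the four layers form a pipeline of increasingly concrete artifacts. The sketch below merely renders them as plain data containers to make the dependencies explicit; the field names are ours, chosen for illustration, while UsiXML expresses the same levels in XML.

```python
from dataclasses import dataclass

# The four Cameleon abstraction layers as plain containers, from the
# most abstract (iv) down to the most concrete (i). All field names
# are illustrative, not UsiXML vocabulary.

@dataclass
class TaskAndConcepts:      # (iv) tasks and the domain objects they touch
    tasks: list
    domain_objects: list

@dataclass
class AbstractUI:           # (iii) independent of any interaction modality
    components: list        # e.g. "abstract selection component"

@dataclass
class ConcreteUI:           # (ii) modality fixed, rendering engine not
    widgets: list           # e.g. "3D ray-selection widget"

@dataclass
class FinalUI:              # (i) runs on a particular computing platform
    source_code: str
```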
The method starts from a task model and a domain model to progressively derive a final user interface. It consists of three steps (depicted in Fig. 2b): deriving one or many abstract user interfaces from a task model and a domain model, deriving one or many concrete user interfaces from each abstract interface, and producing the code of the final user interfaces corresponding to each concrete interface. To realize the first two steps, transformations are encoded as graph transformations performed on the involved models expressed in their graph equivalents. In addition, a graph grammar gathers the relevant graph transformations for accomplishing the sub-steps involved in each step, in the spirit of the sketch below.
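The sketch below shows the rule-application step of such a graph transformation in miniature, over a toy encoding of a model as a set of triples. Real graph grammars match patterns with variables and control rule ordering; this literal matching is a simplification for illustration only, and the rule content is invented.

```python
# A model graph encoded as a set of (source, relation, target) triples.
Graph = set

def apply_rule(graph, lhs, rhs):
    """Apply one graph transformation rule: if every edge of the
    left-hand side pattern is present in the graph, remove the LHS
    and add the RHS. Literal matching is a deliberate simplification."""
    if lhs <= graph:
        return (graph - lhs) | rhs
    return graph

# Toy rule: a task of type "select" yields an abstract selection component.
model = {("t1", "type", "select"), ("t1", "partOf", "taskModel")}
rule_lhs = {("t1", "type", "select")}
rule_rhs = {("t1", "realizedBy", "abstractSelector")}
model = apply_rule(model, rule_lhs, rule_rhs)
```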
Once a concrete user interface results from these first two steps, it is transferred into a development environment for 3D user interfaces, where it can be edited for fine-tuning and personalization. From this environment, the user interface code is automatically generated. By expressing the steps of the method through transformations between models, the method adheres to the Model-Driven Engineering paradigm, in which models and transformations are explicitly defined and used.

3.2.2 Process evaluation

Veggie covers just some of the requirements identified in Section 2. In particular, it still lacks the means to describe the dialog and interaction models. Also, the graphical support is only partly covered. Similarly, the set of rules to go from the Abstract to the Concrete model is limited. When models are modified manually there is no support to track changes, resulting in possible inconsistencies between models. On the other hand, the process covers the rest of the requirements, either automatically or manually. The automatic process is supported for the transformations from the task/domain models to the concrete model. The concrete UI is then edited manually in a high-level editor that supports automatic code generation. The feasibility of the process has been tested through case studies [8].

4. OPEN ISSUES

The challenges of building a framework that supports all the above requirements are considerable. From a technological point of view, it requires an integration of technologies to support the complete process: a transformation engine to support the transformational approach, high-level editors to support the design of each model, and a change tracking system (a reverse engineering process) to identify changes in dependent models are all beneficial in a mature model-based approach.

From a methodological point of view, on the other hand, there are quite some open issues for which the solution is not straightforward.

Traditional task models (such as the CTT) lack the ability to describe real 3D tasks such as selection or object manipulation. A first-glance solution is to expand the task model so as to reflect 3D tasks [8] with a taxonomy of primitives. Another suggested solution is to create another starting model, such as an interaction description model (e.g., the NiMMiT notation [13]). An important question related to this issue is: 'when should a designer switch from the task model to the interaction description model?'

As more and more IVEs are multi-user environments, possibly supporting collaboration between users, task models and interaction descriptions should allow for the specification of cooperative activities. Further research is needed to judge to what extent Cooperative ConcurTaskTrees and (an extended version of) NiMMiT can do the job, and when other task and interaction descriptions come into play. In addition, this poses the constraint of representing 2D vs. 3D tasks working on 2D vs. 3D objects, especially in augmented reality, where both could be combined on top of real-world objects. [15] represents a first attempt in this direction. [14] also refers to some effort in augmented reality for combining traditional 2D widgets with 3D objects.

Another question is related to the FUI. What is the appropriate representation of 3D UIs? Should the 2D desktop metaphor still be used, or are there alternative visualizations or metaphors? Several attempts go towards defining a new toolkit of 3D objects [1,15] which are natively appropriate to 3D applications. Again, having a predefined collection of such 3D widgets is an advantage, but the interaction is then limited to what they offer natively. Expanding existing widgets or introducing custom widgets remains a very challenging issue.

A final issue we would like to address in this paper concerns the mapping rules. This is one of the most complex tasks in an MDE approach. A problem such as 'how to define spatial positions to place 3D UI elements or objects' is not easy to automate, due to the lack of semantic properties that define these spatial relations. Maybe the use of ontologies can help to solve this issue (see the sketch below).
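To see why such mapping rules resist automation, consider what even a naive placement rule needs: semantic relations such as 'on' or 'near' that current models rarely carry. The sketch below shows the kind of rule that would become possible if an ontology supplied those relations; every name in it is invented for illustration.

```python
# Hypothetical spatial mapping: derive 3D positions from semantic
# relations that current models typically do not carry.
PLACEMENT_RULES = {
    "on":   lambda ref: (ref[0], ref[1] + 1.0, ref[2]),   # above the reference
    "near": lambda ref: (ref[0] + 0.5, ref[1], ref[2]),   # offset sideways
}

def place(element: str, relation: str, reference_pos: tuple) -> tuple:
    """Derive a position from a semantic relation; fall back to the
    origin when no rule applies, which is exactly the unresolved case."""
    rule = PLACEMENT_RULES.get(relation)
    return rule(reference_pos) if rule else (0.0, 0.0, 0.0)

# place("menu", "near", (2.0, 1.0, 0.0)) -> a plausible, not principled, layout
```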
5. CONCLUSION

This paper introduced a series of requirements for enabling model-driven engineering of 3D multimodal UIs, an area which is recognized as challenging for MDA.

In general, model transformation holds the promise that each step can be achieved by applying a limited set of transformations, which are declaratively stated. It is a fundamental research problem to assess whether the declarative power of such transformations at least equals the procedural capabilities of the algorithms traditionally used to produce a final UI. On the one hand, such algorithms can be very powerful, but they do not preserve properties like observability, controllability, and traceability. On the other hand, algorithms could probably produce final code that is hardly attainable by model transformation. Moreover, the multiplication of transformations within the same transformation set complicates the problems of transformation dependence, sequencing, and organization. This is a potential reason why mixed model-based approaches could also be attempted. In this case, the advantages of both paradigms could be combined without suffering from their drawbacks.

6. ACKNOWLEDGEMENTS

Part of the EDM research is funded by EFRO (European Fund for Regional Development), the Flemish government, and the Flemish Interdisciplinary institute for Broadband Technology (IBBT). The VR-DeMo project (IWT 030248) is directly funded by the IWT, a Flemish subsidy organization. We gratefully acknowledge the support of the SIMILAR network of excellence (the European research taskforce creating human-machine interfaces SIMILAR to human-human communication), supported by the 6th Framework Program of the European Commission, the Alban program supported by the European Commission, and the CONACYT program supported by the Mexican government.

7. REFERENCES

[1] Andujar, C., Fairén, M., and Argelaguet, F. A Cost Effective Approach for Developing Application-Control GUIs for Virtual Environments. In Proc. of the 1st IEEE Symposium on 3D User Interfaces 3DUI'2006 (Alexandria, March 25-26, 2006). IEEE Comp. Society Press, 2006, pp. 45-52.

[2] Calvary, G., Coutaz, J., Thevenin, D., Limbourg, Q., Bouillon, L., and Vanderdonckt, J. A Unifying Reference Framework for Multi-Target User Interfaces. Interacting with Computers, Vol. 15, No. 3, 2003, pp. 289-308.

[3] Clerckx, T., Luyten, K., and Coninx, K. Dynamo-AID: A Design Process and a Runtime Architecture for Dynamic Model-based User Interface Development. In Proc. of 9th IFIP Working Conf. on Engineering for Human-Computer Interaction jointly with 11th Int. Workshop on Design, Specification, and Verification of Interactive Systems EHCI-DSVIS'2004 (Hamburg, July 11-13, 2004). LNCS, Vol. 3425, Springer-Verlag, Berlin, 2005, pp. 77-95.

[4] Bierbaum, A., Just, C., Hartling, P., Meinert, K., Baker, A., and Cruz-Neira, C. VR Juggler: A Virtual Platform for Virtual Reality Application Development. In Proc. of Conf. on Virtual Reality VR'2001 (Yokohama, March 13-17, 2001). IEEE Comp. Society Press, Los Alamitos, 2001, pp. 89-96.

[5] Cuppens, E., Raymaekers, C., and Coninx, K. VRIXML: A User Interface Description Language for Virtual Environments. In Proc. of the ACM AVI'2004 Workshop "Developing User Interfaces with XML: Advances on User Interface Description Languages" (Gallipoli, May 25, 2004), 2004, pp. 111-117.

[6] Cuppens, E., Raymaekers, C., and Coninx, K. A Model-Based Design Process for Interactive Virtual Environments. In Proc. of 12th Int. Workshop on Design, Specification and Verification of Interactive Systems DSVIS'05 (Newcastle upon Tyne, July 13-15, 2005). LNCS, Vol. 3941, Springer-Verlag, Berlin, pp. 239-250.

[7] Fencott, C. Towards a Design Methodology for Virtual Environments. In Proc. of Workshop on User Centered Design and Implementation of Virtual Environments UCDIVE'99 (University of York, September 30, 1999).

[8] Gonzalez, J.M., Vanderdonckt, J., and Arteaga, J.M. A Method for Developing 3D User Interfaces of Information Systems. In Proc. of 6th Int. Conf. on Computer-Aided Design of User Interfaces CADUI'2006 (Bucharest, June 6-8, 2006). Springer-Verlag, Berlin, 2006, pp. 85-100.

[9] Luyten, K., Clerckx, T., Coninx, K., and Vanderdonckt, J. Derivation of a Dialog Model from a Task Model by Activity Chain Extraction. In Proc. of 10th Int. Conf. on Design, Specification, and Verification of Interactive Systems DSV-IS'2003 (Madeira, June 4-6, 2003). LNCS, Vol. 2844, Springer-Verlag, Berlin, 2003, pp. 203-217.

[10] Mori, G., Paternò, F., and Santoro, C. Design and Development of Multidevice User Interfaces through Multiple Logical Descriptions. IEEE Transactions on Software Engineering, Vol. 30, No. 8, August 2004, pp. 1-14.

[11] Neale, H. and Nichols, S. Designing and developing Virtual Environments: Methods and Applications. In Visualization and Virtual Environments Community Club VVECC'2001, Designing of Virtual Environments, 2001.

[12] Miller, J. and Mukerji, J. MDA Guide Version 1.0.1. Document Number omg/2003-06-01, OMG, June 12, 2003. Available at http://www.omg.org/docs/omg/03-06-01.pdf

[13] Vanacken, D., De Boeck, J., Raymaekers, C., and Coninx, K. NiMMiT: A Notation for Modeling Multimodal Interaction Techniques. In Proc. of the Int. Conf. on Computer Graphics Theory and Applications GRAPP'2006 (Setúbal, February 25-28, 2006).

[14] Vanderdonckt, J. A MDA-Compliant Environment for Developing User Interfaces of Information Systems. In Proc. of 17th Conf. on Advanced Information Systems Engineering CAiSE'05 (Porto, June 13-17, 2005). LNCS, Vol. 3520, Springer-Verlag, Berlin, 2005, pp. 16-31.

[15] Vitzthum, A. SSIML/AR: A Visual Language for the Abstract Specification of Augmented Reality User Interfaces. In Proc. of the IEEE Virtual Reality Conference VR'2006 (March 25-29, 2006). IEEE Computer Society Press, Los Alamitos, 2006, pp. 135-142.

[16] Willans, J. and Harrison, M.D. A Toolset Supported Approach for Designing and Testing Virtual Environment Interaction Techniques. International Journal of Human-Computer Studies, Vol. 55, No. 2, August 2001, pp. 145-165.