Using Meta User Interfaces to Control Multimodal Interaction in Smart Environments

Dirk Roscher, Marco Blumendorf, Sahin Albayrak
DAI-Labor, TU-Berlin
Ernst-Reuter-Platz 7, 10587 Berlin
Firstname.Lastname@DAI-Labor.de

ABSTRACT

Smart environments bring together multiple users, (interaction) resources and services. This creates complex and unpredictable interactive computing environments that are hard to understand. Users thus have difficulties building up a mental model of such interactive systems. To address this issue, users need possibilities to evaluate the state of these systems and to adapt them according to their needs. In this work we describe the requirements and functionalities for evaluating and controlling interactive spaces in smart environments from the system and the user perspective. Furthermore, we present a model-based implementation of these capabilities which is accessible for the user in the form of a meta user interface.

Categories and Subject Descriptors

H.5 [Information Interfaces and Presentation]: User interfaces; H.1.2 [Models and Principles]: User/Machine Systems - Human factors; H.5.2 [Information Interfaces and Presentation]: User Interfaces - graphical user interfaces, interaction styles, input devices and strategies, voice I/O.

General Terms

Management, Design, Human Factors

Keywords

Meta user interfaces, human-computer interaction, smart environments, model-based user interfaces

1. INTRODUCTION

The ongoing realization of the ubiquitous computing paradigm and the creation of environments holding multiple networked (interaction) resources lead to new forms of human-computer interaction. While current systems support multiple applications through multi-tasking and multiple users one after the other or via web-based applications, their interfaces are usually built for one user using one service with one limited and fixed set of interaction resources. Future interaction in smart environments, however, brings together multiple users, multiple interaction resources and multiple services (applications). This raises the need to manage and control the assignment of resources, users and services and leads to the complex problem of considering the multiplicity in three dimensions (Figure 1).

Figure 1: The problem is characterized by multiple users using multiple services via multiple interaction resources, which leads to a highly complex scenario with different dimensions.

Considering multiple services simultaneously (1) requires e.g. the distribution of screen space among them, the shared usage of interaction resources like microphones or loudspeakers, as well as the exchange of semantics and information between the services to reach a useful level of interconnection. Multiple simultaneous users (2) require e.g. the shared or alternating usage of interaction resources, the resolution of conflicts, the collaborative usage of resources and services, the possibility to exchange information between multiple users, as well as the consideration of privacy issues. Finally, the multiple available interaction resources (3) drive new forms of interaction, but this also requires e.g. the possibility to directly select and address resources according to the needs of users and services, the management of resources (occupied resources), the distribution of information across multiple resources, and the adaptation to the resource properties. As different resources can also support different modalities, this involves the utilization of multimodal interaction capabilities. In this paper we mainly focus on the latter aspect (3), it being a facilitator for the former two.
Without the possibility to manage the utilization of interaction resources, it is very unlikely that multi-user and multi-application scenarios can benefit from the availability of multiple interaction resources.

In the remainder of the paper, we first introduce the user perspective by explaining the functionalities users need to manage the utilization of interaction resources. Thereby they can determine which services or parts of the services are presented on or controlled through which interaction resources. Following this, we elaborate on the system perspective by explaining how the system manages services and interaction resources and provides the functionalities to establish connections between both entities. In section 4 we introduce our implementation. Based on a runtime system using a user interface model with multiple levels of abstraction to describe such multimodal, distributed user interfaces for smart environments, we present how the user can manage the utilization of interaction resources. A comparison to the related work and a summary and outlook complete the paper.

2. THE USER PERSPECTIVE

The utilization of multiple interaction resources (IRs) at the same time poses new demands on users. The user needs the possibility to keep track of the user interfaces of the different services (service UIs) spread across different IRs and should be provided functionalities to alter the configuration according to her needs. We refer to this as the configuration of the (personal) interactive space of the user. An ambient interactive space has been defined as a dynamic assembly of physical entities coupled with computational and communicational entities to support human activities [4]. According to this definition, we define the (personal) interactive space for the remainder of this paper as the set of currently used services and interaction resources as well as the connections between them (see also Figure 2). The interactive space thus defines which services or parts of services the user currently accesses and the way she accesses the different services (through which interaction resources).

From the user perspective, the utilization of one or several services currently available in a smart environment thus requires the configuration of her interactive space to determine the IRs through which she wants to utilize the services. Two possibilities can be addressed to configure the personal interactive space: (1) the configuration of a single IR and (2) the configuration of a set of multiple IRs. In the first case, the user uses a given IR to control the utilization of this very interaction resource. This means the IR provides access to a meta-level of the user interface, allowing the alteration of its presentation. For a specific IR this includes adding or removing parts of the service UI to/from the IR. In the second case, the user again uses an IR to access a meta-level of the user interface. In this case, however, the configuration via the IR also affects other IRs. The user can move or clone parts of the service UI between IRs or add and remove elements to an IR different from the one currently used. This second configuration requires access to the complete environment and the available services and provides a freely configurable interactive space.

To make these functionalities available for the user, a configuration interface is required that has to be provided independently from the services. Meta user interfaces (meta-UIs) have been proposed to provide such common facilities for user interfaces and thus a generic control on a meta-level [5]. As illustrated in Figure 2, this meta-UI provides the possibility to configure connections between interaction resources and services, allowing the definition of which service UIs (or parts of service UIs) are utilized through which interaction resources. As IRs provide different capabilities and support different modalities, this also requires the (multimodal) support of the different resources by the service UIs. To address this issue we utilize a runtime system providing distributed multimodal user interfaces. As described in the next section, this runtime system is aware of the context-of-use and manages the service UIs. It also controls the connections between service UI parts and interaction resources. The meta-UI is provided as the control interface to configure the runtime system.
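To make this definition concrete, the following minimal sketch models an interactive space as the set of connections between service UI parts and interaction resources. All class and method names are hypothetical illustrations, not the paper's actual system:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class InteractionResource:
    ir_id: str
    modality: str  # e.g. "graphical" or "voice"

@dataclass(frozen=True)
class UIPart:
    service_id: str
    part_id: str   # a service UI or a part of it

@dataclass
class InteractiveSpace:
    """The (personal) interactive space: the currently used services
    and IRs plus the connections between them."""
    connections: set = field(default_factory=set)  # {(UIPart, InteractionResource)}

    def resources_for(self, part: UIPart) -> set:
        """Through which IRs is this UI part currently accessed?"""
        return {ir for (p, ir) in self.connections if p == part}

    def parts_on(self, ir: InteractionResource) -> set:
        """Which UI parts does this IR currently present?"""
        return {p for (p, r) in self.connections if r == ir}
```

Representing the space as a plain set of (part, resource) pairs makes both views of the configuration, per part and per resource, trivial queries.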
3. THE SYSTEM PERSPECTIVE

From the perspective of the system, the configuration of a user's interactive space involves the management of the service UIs and their current states as well as of the available IRs. To support the distribution of (multimodal) user interfaces, the underlying runtime system is also responsible for the assignment of IRs to service user interfaces, so that the user can interact with the service. In our work, we assume a server-side system that is aware of the available services (in the form of a UI model for each service) and of the available IRs (represented in a context model). Figure 2 shows the elements of this runtime system.

Figure 2: Distributed interaction in smart environments via the personal interactive space. A runtime system manages the user interfaces (in the form of UI models), the interaction resources (in the form of a context model) and the connections between the two. The user controls the interaction via a meta-UI.

The interaction with a service is defined by a UI model that combines different levels of abstraction. Similar to the CAMELEON Reference Framework [1], we distinguish the task, abstract UI (AUI) and concrete UI (CUI) levels, which allow the modality-independent definition of the interaction and the provisioning of different concrete modality-specific representations. In our approach the UI model provides a state at runtime, which describes the currently possible interaction at all times [4].

The runtime system also continuously senses the environment for new IRs and manages them in a context model. The model comprises information about users, environment and IRs, where representations of the IRs define the available resources internally for the system. The runtime system uses the IR representations to push CUI elements to these resources, as we described in [3]. Thereby the system selects the CUI element matching the constraints of the IR. Before it pushes this element to the IR, it performs the necessary adaptation steps to ensure an optimal presentation. Based on these functionalities the system can provide capabilities to establish connections between the service UI elements defined on the task level and the interaction resources, making these elements accessible for users.
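The selection-and-push step just described could be sketched as follows. The helpers fits, adapt and deliver are assumptions made for illustration and do not mirror the actual MASP API:

```python
def push_to_resource(cui_variants, ir, adapt):
    """Sketch: select the CUI variant matching the IR's constraints,
    adapt it, and push it to the resource for final rendering."""
    candidates = [cui for cui in cui_variants
                  if cui.modality == ir.modality and cui.fits(ir)]
    if not candidates:
        return None                  # no suitable modality-specific variant
    chosen = candidates[0]
    final_ui = adapt(chosen, ir)     # e.g. scale a layout to the screen size
    ir.deliver(final_ui)             # the IR renders and presents the result
    return chosen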
In the simplest case, this leads to UI elements connected to a single IR, e.g. the presentation of a user interface on a screen. The ability of the system to maintain and alter this connection and the possibility to push the UI elements to any connected IR now also allows changing the target IR, leading to the migration of the UI, e.g. to another screen. Redirecting the elements to an IR of another modality could e.g. also lead to starting a voice dialog. However, to realize multimodal interaction we aim at the simultaneous utilization of multiple interaction resources. This in turn requires the distribution of the available UI elements to multiple IRs simultaneously (see also [12, 7]). Multimodal interaction can be created if these IRs support different modalities. Redundancy in the interaction can be created by connecting the same UI element to multiple IRs.

The different configuration scenarios described above can technically be brought down to the atomic operations of creating a connection between a CUI element and an IR or removing such a connection. For example, changing the interaction modality of a task from graphical to vocal includes the removal of the connections between IRs and the graphical CUI elements of that task and the creation of new connections between the voice CUI elements and the appropriate IR (or IRs). When a "CUI to IR" connection is established, our runtime system sends the element to the IR, which then creates the final user interface and delivers it to the user. If a CUI element should no longer be accessible through an IR, the appropriate connection between both is destroyed, which results in the removal of the corresponding FUI from the IR. It must be noted that the associations between the CUI elements and the elements at higher levels of abstraction (task and AUI) are always preserved. This is necessary for the state synchronization of all elements as described in [2]. For example, if a task becomes no longer available to the user, the associations assure that all connections between the CUI elements belonging to the task and the interaction resources are removed. As a result the user cannot access the user interface of the task and has no possibility to perform it.
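Expressed against the InteractiveSpace sketch from Section 2, the two atomic operations and the graphical-to-vocal example could look as follows. Here ir.deliver/ir.remove stand for transferring the final UI to and from the resource, and task.cui_elements() is a hypothetical accessor for the CUI elements associated with a task:

```python
def connect(space, cui_element, ir):
    """Atomic operation: create a 'CUI to IR' connection. The runtime
    then sends the element to the IR, which renders the final UI (FUI)."""
    space.connections.add((cui_element, ir))
    ir.deliver(cui_element)

def disconnect(space, cui_element, ir):
    """Atomic operation: destroy the connection, which removes the
    corresponding FUI. Associations to the task/AUI levels stay intact."""
    space.connections.discard((cui_element, ir))
    ir.remove(cui_element)

def switch_task_to_voice(space, task, voice_ir):
    """Change the interaction modality of a task from graphical to vocal,
    expressed purely through the two atomic operations."""
    for cui in task.cui_elements(modality="graphical"):
        for ir in list(space.resources_for(cui)):
            disconnect(space, cui, ir)
    for cui in task.cui_elements(modality="voice"):
        connect(space, cui, voice_ir)
```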
In the next section, we describe our implementation of the described system. The Multi-Access Service Platform (MASP), a model-based runtime system, provides the basis to provide users a meta-UI allowing them to control multimodal interaction.

4. THE MASP & THE META-UI

To evaluate the described approach for the control of multimodal interaction, we have implemented a first version of a meta-UI with the MASP, our implementation of the UI runtime system described above. Providing a model-based framework for the development and execution of multimodal multi-device user interfaces, the MASP provides the means to develop interactive services for smart environments. Combined with the capability of the MASP to automatically discover interaction resources in the environment, the prerequisites are fulfilled to implement a meta-UI service allowing the user to evaluate and control multimodal interaction.

Figure 3: Screenshot of our implementation of the meta-UI.

Figure 3 shows a screenshot of the current implementation of the meta-UI. In the upper left corner the user can request the currently available services. In the list that appears, the user can choose which one she wants to connect to the currently used screen. Once the user selects a service, the UI of the selected service is shown in the centre, and the configuration options at the bottom of the screenshot can be used to configure the current service UI. Here we distinguish four features the user can utilize to configure her interactive space. (1) The migration feature provides possibilities to migrate a service UI from one interaction resource to another, e.g. to transfer the UI to another screen better viewable from the user's current position. Through the distribution feature (2) the user can distribute parts of the user interface to other IRs. Thereby the user can also specify if the selected parts should be cloned or moved to the target IR. The third configuration feature is called multimodality (3) and provides possibilities to configure the utilized modalities within the interaction. This allows users to e.g. switch off the audio output of the MASP if it is currently disturbing the user. The adaptation feature (4) allows the user to configure further functions of the MASP. For example, the MASP supports a so-called "FollowMe" mode which can be configured through the adaptation feature. The activation of the "FollowMe" mode leads to an automatic configuration of the interactive space by the MASP over time. The MASP senses for changes in the interaction resources available to the user (resources becoming available or no longer being available) and reconfigures the interactive space according to the new resource combination, trying to support a broad range of interaction possibilities.
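One iteration of such a "FollowMe" reconfiguration could be sketched as below, reusing the operations from the previous sections; sense_resources and redistribute are hypothetical placeholders for the MASP's sensing and distribution logic:

```python
def follow_me_step(space, user, sense_resources, redistribute):
    """Sketch of one 'FollowMe' iteration: detect IRs that appeared or
    disappeared around the user and reconfigure the interactive space."""
    available = sense_resources(user)                # set of IRs reachable now
    connected = {ir for (_, ir) in space.connections}
    for ir in connected - available:                 # resources that vanished
        for part in list(space.parts_on(ir)):
            disconnect(space, part, ir)
    # Redistribute UI parts over the new resource combination, trying to
    # keep as many interaction possibilities available as possible.
    redistribute(space, available)
```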
These configuration options allow adapting the interactive space according to the possible changes defined e.g. by Coutaz [6]. The user can redistribute the UI elements to different interaction resources (at the interactor level) by moving or cloning elements, migrate parts of or the complete user interface to another IR, and can also remould the existing user interface on one IR by adding or removing UI elements. Moreover, the status symbols in the upper centre of the screen allow the user to observe which modalities are currently enabled. In the bottom right corner the user can "release" her interactive space, which results in removing all service UIs from all interaction resources.

5. STATE OF THE ART

Several approaches exist which enable the configuration of the relationship between services and IRs by some means or other. Most of them also provide some kind of meta-UI to allow the user to access the configuration possibilities. Molina et al. [9] describe a system for the rapid prototyping of user interfaces distributed over several graphical IRs. They briefly mention a meta-UI allowing the distribution of user interface elements to other graphical IRs. In [8] a similar approach is presented with a focus on a development framework to design user interfaces distributed over several graphical IRs. The system supports the attachment and detachment of user interface elements from/to graphical IRs. An approach which supports the migration of complete user interfaces is presented in [10]. However, none of the solutions we are aware of supports a configuration as flexible and broad as described in this paper: the distribution of user interface elements to arbitrary interaction resources to allow freely configurable multimodal interaction.

In the mentioned approaches the meta-UIs are not the focus but are developed to give access to exactly the specific configuration possibilities described. They do not consider other functionalities which could improve the possibilities of users to simplify the control of their interactive space. The work described by Vanderhulst [11] is very interesting as it focuses on the meta-UI and not the system side to "put the user in control". However, the approach focuses on the handling of services, e.g. starting/stopping or suspending/resuming them. The issue of how to utilize a service is only considered in passing. Thus that work should be a good addition to the one described here.

6. CONCLUSION

We presented our approach to control multimodal interaction in smart environments. The functionalities to keep the user in control of the interaction by configuring her interactive space as well as the prerequisites from the system perspective were described. Furthermore, a first implementation of the described concept, allowing the user to access these capabilities through a meta-UI, was presented. However, there are still some aspects which deserve further investigation.

At the moment the user has to configure the interactive space based on the provided information by herself. But the system, with its knowledge about the available IRs and services as well as the user and further environment information, can at least help the user by providing useful configuration possibilities. Furthermore, the system can automatically configure the interactive space of the user (as we started to implement with the "FollowMe" feature). However, the automatic configuration can also reduce the satisfaction of the user if it does not exactly match her preferences and requirements, and should therefore be used carefully.

Another aspect that arises with the configurability of the interactive space is the persistence of the user configuration. When the user (re)configures her interactive space, she adapts it to her preferences and needs in the current situation. It thus appears suitable that the system utilizes this knowledge by providing the same configuration to the user in the same situation, so the user does not need to do the same configuration over and over again. However, the automatic analysis of the current situation and the detection of the relevant context parameters is a difficult task which needs further investigation. A first solution could be to let the user specify the relevant situation parts if she wants the system to be able to restore a given interactive space.
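Such a user-driven persistence mechanism could, for instance, store a configuration under the situation parameters the user marked as relevant. The sketch below is one possible reading of this future-work idea, not an implemented MASP feature:

```python
# Saved configurations, keyed by the user-selected situation parameters.
saved_spaces = {}

def save_configuration(space, relevant_context):
    """Store the current connections under e.g. {"room": "kitchen"}."""
    key = frozenset(relevant_context.items())
    saved_spaces[key] = set(space.connections)

def restore_configuration(space, current_context):
    """Re-establish a saved configuration when the same situation recurs."""
    key = frozenset(current_context.items())
    if key in saved_spaces:
        space.connections = set(saved_spaces[key])
```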
Another direction for future work are the problems occurring when considering multiple users and multiple services.

7. REFERENCES

[1] Balme, L., Demeure, A., Barralon, N., Coutaz, J., Calvary, G. Cameleon-rt: A software architecture reference model for distributed, migratable, and plastic user interfaces. In EUSAI, 2004.

[2] Blumendorf, M., Feuerstack, F., Albayrak, S. Event-based synchronization of model-based multimodal user interfaces. In Proceedings of the MoDELS'06 Workshop on Model Driven Development of Advanced User Interfaces (MDDAUI), 2006.

[3] Blumendorf, M., Feuerstack, F., Albayrak, S. Multimodal user interaction in smart environments: Delivering distributed user interfaces. In European Conference on Ambient Intelligence: Workshop Proceedings, 2007.

[4] Blumendorf, M., Lehmann, G., Feuerstack, F., Albayrak, S. Executable models for human-computer interaction. In 15th International Workshop on the Design, Verification and Specification of Interactive Systems, 2008.

[5] Coutaz, J. Meta-user interfaces for ambient spaces. In TAMODIA '06: Proceedings of the 5th International Workshop on Task Models and Diagrams, 2006.

[6] Coutaz, J., Calvary, G. HCI and software engineering: Designing for user interface plasticity. In The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, 2008.

[7] Demeure, A., Calvary, G., Sottet, J.-S., Vanderdonckt, J. A reference model for distributed user interfaces. In TAMODIA '05: Proceedings of the 4th International Workshop on Task Models and Diagrams, 2005.

[8] Grolaux, D., Vanderdonckt, J., Van Roy, P. Attach me, detach me, assemble me like you work. In INTERACT 2005: Int. Conf. on Human-Computer Interaction, 2005.

[9] Vanderdonckt, J., González, P., Fernández-Caballero, A., Lozano, M.D., Molina, J.P. Rapid prototyping of distributed user interfaces. In Proceedings of the 6th Int. Conf. on Computer-Aided Design of User Interfaces (CADUI 2006), 2006.

[10] Paternò, F., Santoro, C., Scorcia, A., Bandelloni, R., Mori, G. Web user interface migration through different modalities with dynamic device discovery. In AEWSE'07, 2nd International Workshop on Adaptation and Evolution in Web Systems Engineering, 2007.

[11] Vanderhulst, G., Luyten, K., Coninx, K. Put the user in control: Ontology-driven meta-level interaction for pervasive environments. In First International Workshop on Ontologies in Interactive Systems (Ontoract 2008), 2008.

[12] Vandervelpen, C., Coninx, K. Towards model-based design support for distributed user interfaces. In NordiCHI '04: Proceedings of the Third Nordic Conference on Human-Computer Interaction, 2004.