Automated planning for User Interface Composition

Yoann Gabillon, Mathieu Petit, Gaëlle Calvary, Humbert Fiorino
University of Grenoble, CNRS, LIG
385, avenue de la Bibliothèque, 38400 Saint-Martin d'Hères, France
{yoann.gabillon, mathieu.petit, gaelle.calvary, humbert.fiorino}@imag.fr

Copyright is held by the author/owner(s). SEMAIS'11, Feb 13 2011, Palo Alto, CA, USA.

ABSTRACT
In ubiquitous computing, both the context of use and the users' needs may change dynamically with users' mobility and with the availability of interaction resources. In such a changing environment, an interactive system must be dynamically composable according to the user need and to the current context of use. This article elicits the degrees of freedom User Interface (UI) composition faces, and investigates automated planning to compose UIs without relying on a predefined task model. The composition process considers a set of ergonomic criteria, the current context of use, and the user need as inputs of a planning problem. The user need is specified by the end-user (e.g., get medical assistance). The system in turn composes a UI by assembling fragments of models along a planning process.

Categories and Subject Descriptors
H.5.2 [User Interfaces]: Ergonomics, Graphical user interfaces (GUI), Prototyping, User-centered design. D.2.2 [Software Engineering]: Design Tools and Techniques, User Interfaces.

General Terms
Design, Human Factors, Algorithms.

Keywords
User interface composition, semantic models, automated task planning, context of use.

1. INTRODUCTION
Pushed forward by new information technologies, Weiser's vision of ubiquitous computing comes to reality [11]. His definition of ambient computing implies (1) a global knowledge of an information system's context, and (2) adaptation processes to comply with a given context of use. The context of use is usually defined as a <user, platform, environment> triplet. Unpredictable contexts of use might affect users' interactive behaviors and task organization. Therefore, each User Interface (UI) design option, from the task model to the final UI, is highly contextual and might be decided at runtime. Most ubiquitous design frameworks therefore consider variations of the context of use as inputs to select UI options (i.e., plastic design [9], automatic generation [6], mashups [1]). However, to the best of our knowledge, variation of the user task is usually left out.

This article outlines an approach, based on automated planning, to support task as well as UI variations in an integrated framework for UI composition. In the following, Section 2 exemplifies multi-level UI composition on a medical support case study. Section 3 elicits the degrees of freedom UI composition faces. Section 4 introduces automated planning and highlights the UI composition process. Section 5 presents an integrative framework for UI composition by planning; the focus is set on the composition of models (model-based composer) and of code (code composer). Section 6 summarizes our contributions and draws some perspectives.

2. RUNNING CASE STUDY
Victor is a New York citizen on vacation in Philadelphia. After spending his day tasting the rich local food, Victor feels bloated at night and needs to find the doctor on duty. Using his PDA, he specifies his need in general terms: "I would like to get medical support".

According to Victor's need, the available interaction resources, and the existing information, the system abstracts the goal, plans a task model, and composes one possible UI. The composition process is not fully autonomous: it requires additional information from Victor. The negotiation UIs (Figure 1) are composed by the system as well.

Given Victor's current location, the system asks him whether he prefers to return home or to find assistance in Philadelphia (Figure 1a). Victor chooses to consult a local doctor. The system therefore finds and provides him with possible local contact information: the nearest hospital or doctor on duty, a medical hotline, or the firemen (Figure 1b).

Fig. 1. Automatically composed UI: (a) possible locations; (b) possible options.

Victor selects the doctor on duty. The system provides him with contact and location information. The UI layout matches the current user platform:

Smartphone. If Victor prefers to keep information at hand, a UI is generated for his Smartphone. With respect to the limited screen resolution, pieces of information are tabbed and no additional data is provided (Figure 2).

Fig. 2. The generated UIs for a Smartphone.

Desktop wall. If a desktop wall is available, the system generates a single-pane UI allowing Victor to contact the doctor's office and/or to get route information to it. Additional information about close services, such as the nearest all-night chemist, is also provided (Figure 3).

Fig. 3. The UI generated for a desktop wall display.

3. MODELS ARE KEY
This section goes back to model-based design in Human-Computer Interaction (HCI), and argues for keeping these models at runtime so as to support dynamic adaptation.

3.1 Model-based design
UIs are modeled along several levels of abstraction. For example, the CAMELEON reference framework identifies four main levels of design decisions [2]. The task model (TM) describes how a given user task can be carried out; the abstract UI (AUI) delineates task-grouping structures (i.e., workspaces); the concrete UI (CUI) selects and lays out the interaction elements (i.e., interactors) into the workspaces; at last, the final UI (FUI) is about the code. Mappings relate these models to each other. For example, a task should be mapped to at least one workspace of the AUI.

In a dynamic context of use, any of these UI design decisions and their subsequent models and mappings might be updated at runtime to match the current context of use. As long as these adaptations satisfy the usability and utility properties, the UI is said to be plastic [9]. In Victor's case study, every design decision might be adapted in a plastic way. For example, the task "Find nearest chemist" may be removed from the task model. The AUI model associated with the Smartphone favors the "Call the office" subtask, whilst the desktop wall version gives simultaneous access to the two subtasks ("Call the office" and "Find route information"). Variations at the CUI level are not exemplified in the case study; we could imagine a switch from a route display to a list of directions so as to fit the Smartphone display. Such adaptations might be seen as a transformation between two graphs of models.

3.2 Graph of models to support adaptation
Earlier work defined principles for UI plasticity [8]. The authors structured the CAMELEON reference framework as a network of models and mappings (Figure 4), and argued for keeping this graph alive at runtime so as to support adaptation.

Fig. 4. Semantic graph of models of an interactive system [8].

The graph expresses and maintains multiple perspectives on a system. For example, a UI may include a task model, a concept model, an AUI model and a CUI model linked by mappings. In turn, the UI components are mapped onto items of the functional core, whereas the CUI interactors are mapped onto the input and output (I/O) devices of the platform. Although such a model provides a helpful organizational view on the elements and relationships involved when designing plastic interactive software, the proposed mappings between the context of use and the other components hardly describe the contextual choices inside each model (TM, AUI, CUI, etc.).

Demeure et al. provide a complementary semantic graph of models to control UI plasticity within each design option level [4]. Their model allows UI designers to check out replaceable (i.e., functionally equivalent) units at runtime. For example, a given layout of interactors at the CUI level might be switched to another one depending on the desired ergonomic properties [7]. We propose to replace these hand-made choices by predicates dependent on the context of use, and manipulated by the system.

Figure 5 illustrates the design process along the models and mappings proposed in [8] and the replaceable options described in [4]. For example, at the task level (TM), two options exist for T2 depending on the context of use (Figure 5 b&c).

Fig. 5. Example of a TM options graph.

In Figure 5, within a level of abstraction, units relate to each other according to a consumer-provider relationship (Figure 5: c→p link). For example, at the TM level, one of the options for the task T2 relies on the occurrence of a provider leaf option¹ for the task T3 (Figure 5a). Therefore, as T2 "consumes" T3, this option will be triggered if and only if T3 is satisfied. Depending on the current context of use, consumer-provider links behave like "opened" or "closed" transistors. In a given c→p relationship, the status of the transistor depends on the contextual requirements of the provider (p). For example, at the TM level in Figure 5, one of the task T2 options is possible only for experienced users (Figure 5d).

¹ A leaf option neither provides nor reifies other options.

In UI design, mappings link together options of different levels of abstraction. For example, interactors from the CUI level are usually mapped onto workspaces of the AUI level. These mappings, presented in Figure 4, or the definitional links in [4], constitute abstracting-reifying relationships between the options of distinct CAMELEON levels of abstraction (Figure 6: abstracting-reifying links).

For example, the TM level presented in Figure 5 might be reified into several options of an AUI level (Figure 6). In Figure 6, a task option T1 is reified into a workspace layout W3 of the AUI level. Like the c→p relationship, an abstracting-reifying relationship between levels of abstraction makes sense in a given context of use only. For example, Figure 6 depicts a runtime configuration where the workspace layout W3 cannot reify the task T2 given the current context of use (Figure 6a).

Fig. 6. Abstracting-reifying relationships between two design options at the TM and AUI levels of abstraction.

The relationships we propose (abstracting-reifying and c→p) for modeling software can easily be explored automatically. The next section investigates automated planning.

4. UI COMPOSITION BY PLANNING
This section presents the core principles of planning and shows how this approach is valuable for UI composition.

4.1 Principles of automated planning
An automated planning algorithm derives a temporal sequence of actions, a plan, to accomplish a given goal [5]. For example, in the previous case study, the sequence {"Call the doctor" → "Find the route information"} is a plan made of two actions. A planning algorithm pipes syntactic processes to perform symbolic computations. Such logical reasoning is formally described by a finite-state machine where actions are transitions between possible states of the world. Actions are defined by sets of pre/post-conditions. Pre-conditions specify the runtime dependencies of an action, while post-conditions are met after executing the action. For example, Victor's Smartphone should be connected (pre-condition) to display a location map (action). When this action is executed, the map is eventually displayed (post-condition) on the Smartphone. An updated state of the world integrates these new post-conditions, therefore enabling further actions.

4.2 Automated planning for UI composition
A planning solver computes a transition graph between an initial state of the world and a final state corresponding to the system/user goal. Currently, such algorithms are mainly applied to service composition [10]. However, as illustrated in our case study, context-dependent UI composition and automated planning strongly relate. Thus, we propose to address UI composition by planning, where:

- "Actions" are "user interface options". Existing components (e.g., the UI associated to the task "Call the office") are actions for the planner;
- The "state of the world" is made of the current "context of use" and the "ergonomic properties" to be satisfied. For example, the fact "Victor owns a Smartphone" is a predicate of the state of the world;
- The "selected plan" is the "composed UI". For example, the UI displayed on the Smartphone is a concretization of the plan {"Choose the city" → "Choose the doctor" → "Contact the doctor" → {"Call the office" → "Find the route information" → "Find the nearest pharmacy"}} computed by the planner.

Even if several challenges still need to be worked out to bridge the gap between automated planning and UI composition, the next section presents Compose, a first framework for rapidly prototyping UIs by planning. Its use by end-users belongs to the future.

5. THE COMPOSE FRAMEWORK
Compose is a proof of concept of UI composition by planning. It has been built on top of several functional Java-coded components (Figure 7).

Fig. 7. Functional decomposition of Compose.

The context-of-use and quality-in-use managers translate the required ergonomic criteria and the current context of use into predicates. These assertions define the current state of the world. For example, the predicate Has("User", "Desktop Wall") is true when Victor stands nearby a managed desktop wall.

The user requirements manager expresses a user need as a goal to be met. For example, Victor's need would be to "Get medical support".

The model-based composer and the code composer are the core components of Compose. The model-based composer handles the planning process, whilst the code composer translates a resulting plan into a FUI. In the current prototype, planning is applied to the task level only. Once the TM level is composed, mappings are made with a general-purpose graphic toolkit called COMET [3]. COMETs are reusable context-aware widgets defined at the task level and reified along the CAMELEON reference framework. The next sections focus on the core components of Compose.

5.1 Model-based composer
The model-based composer takes actions as inputs and structures them into a plan. This planning process is twofold: at first, the user task model is composed by collating predefined subtasks (Figure 8, p1); next, each task (i.e., each planner action) is mapped onto a UI (Figure 8, p2). These selections bring out a composed UI (i.e., the selected plan) whose properties match the current state of the world. The resulting plan is a semantic description of the UI to be composed.

Fig. 8. Compose planner instantiation.

In Victor's case study, Compose waits for a user need specification (i.e., "Get medical support"). The composer tries to find a corresponding TM-level entry point. The option "Get medical support" is selected. The planning algorithm then explores the semantic network of c→p relationships between the task options of the TM level (Figure 9). For each uncovered task option, Compose checks whether or not it is possible to map the task onto a COMET and render the UI. These mappings are derived according to the current state of the world. For example, leaf task options like "Choose the city" or "Choose the doctor" might be mapped onto a UI as soon as Victor's platform is available, whatever the characteristics of the platform (Figure 9: t1 & t2). Other task options, like "Call the office", rely on the carrier capabilities at the platform level (Figure 9: t3). The "Contacting the doctor" option distinguishes between several screen sizes and resolutions (Figure 9: t4 & t5). When a large screen is available, such a sub-task option involves three leaf options (Figure 9: u1), while on a Smartphone display solely two of them are displayed (Figure 9: u2).

Once all the contextual pre-requisites of a provider option are met, the relationships to its consumers turn green, and each of them might in turn be checked out. After a provider-consumer relationship status has been specified, the state of the world is updated with the new facts the providing option concurs to establish. For example, when the "Choose the city" pre-requisites are met, the composer knows for sure that Victor will be able to specify his search location, and the fact "The location has been set" is added to the state of the world.

Figure 9 outlines the status of the c→p relationships between the task options after Compose has explored and checked out a state of the world wherein Victor interacts on a desktop wall display.

Fig. 9. Possible TM level planning when a desktop wall is available.

Such a contextualized semantic UI model highlights the appropriate task factorization in a given context of use. When a green path of provider-consumer relationships is established from the provided objective down to the leaf task options, a task tree has been found that achieves the user goal. In such a case, the code composer is provided with the planned task tree. Subsequent mappings are made between tasks and COMETs to derive the final UI.

5.2 Code composer
The code composer derives the UI code from the graph of models at the task level. At design time, the options of the task level have been statically associated to COMETs. Therefore, in Compose, each action of the plan is reified by a contextualized COMET. The code composer brings these pieces of UI together in a unified layout. For instance, the desktop wall task tree provided by the model-based composer is mapped to the COMET presented in Figure 10: the action "Get medical support" is mapped to a "COMET C7" laying out a sequence of frames on the desktop wall. These frames contain several sub-COMETs ("COMET {C3, C1, C4}") to map the task options "Choose the city", "Choose the doctor" and "Contact the doctor". In turn, "COMET C4", which reifies the task "Contact the doctor", contains several vertically aligned sub-COMETs. These sub-COMETs ("COMET {C2, C5, C6}") are mapped in the same way.

Fig. 10. The "Desktop Wall" planned task tree. Each task is reified by a pre-defined COMET.

6. CONCLUSION AND FUTURE WORK
This article outlines a work in progress to support opportunistic user needs. A UI is composed by selecting a path in a graph of models according to the current context of use and the ergonomic properties to be satisfied. UI composition is seen as a planning problem. So far, the focus has been set on the model-based composer, whatever the time: at design time for the designer, thus providing a rapid prototyping tool, or at runtime for the end-user, as an intelligent assistant.

Future work includes improvements of planners to fully support UI composition. This means (1) generating trees (i.e., task structures) instead of sequences, (2) defining appropriate functional and implementational software architectures for general-purpose ubiquitous computing, and (3) taking non-functional properties into account (i.e., returning the best plan instead of the first one). Thus, beyond perspectives in HCI, this work has challenged planning for ubiquitous computing.

7. ACKNOWLEDGMENTS
This work has been mainly funded by the "Informatique, Signal, Logiciel Embarqué" research cluster of the Rhône-Alpes region. It has also been supported by the French ANR MyCitizSpace and the European ITEA2 UsiXML projects.

8. REFERENCES
[1] Brodt, A., Nicklas, D., Sathish, S., and Mitschang, B. 2008. Context-aware mashups for mobile devices. In WISE 2008: Web Information Systems Engineering. Springer-Verlag, 280-291.
[2] Calvary, G., Coutaz, J., Thevenin, D., Limbourg, Q., Bouillon, L., and Vanderdonckt, J. 2003. A unifying reference framework for multi-target user interfaces. Interacting with Computers 15, 3, 289-308.
[3] Demeure, A., Calvary, G., and Coninx, K. 2008. COMET(s), a software architecture style and an interactors toolkit for plastic user interfaces. In Proc. of the 15th Int. Workshop on Interactive Systems Design, Specification, and Verification. Springer-Verlag, 225-237.
[4] Demeure, A., Calvary, G., Coutaz, J., and Vanderdonckt, J. 2006. The COMETs Inspector: towards run time plasticity control based on a semantic network. In Proc. of the 5th Int. Workshop on Task Models and Diagrams for User Interface Design: TAMODIA'06. Springer LNCS 4385, Hasselt, Belgium, 324-339.
[5] Nau, D., Ghallab, M., and Traverso, P. 2004. Automated Planning: Theory & Practice. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
[6] Paternò, F., Mancini, C., and Meniconi, S. 1997. ConcurTaskTrees: a diagrammatic notation for specifying task models. In Proc. of the IFIP TC13 International Conference on Human-Computer Interaction. Chapman & Hall, 362-369.
[7] Scapin, D.L. and Bastien, J.M.C. 1997. Ergonomic criteria for evaluating the ergonomic quality of interactive systems. Behaviour & Information Technology 16, 4, 220-231.
[8] Sottet, J.-S., Ganneau, V., Calvary, G., Coutaz, J., Demeure, A., Favre, J.-M., and Demumieux, R. 2007. Model-driven adaptation for plastic user interfaces. In Proc. of the 11th IFIP TC13 Int. Conf. on Human-Computer Interaction: INTERACT'07. Springer LNCS 4662, Rio de Janeiro, Brazil, 397-410.
[9] Thevenin, D. and Coutaz, J. 1999. Plasticity of user interfaces: framework and research agenda. In Proc. of the IFIP TC13 Int. Conf. on Human-Computer Interaction: INTERACT'99 (30 August - 3 September 1999). IOS Press, 110.
[10] Traverso, P. and Pistore, M. 2004. Automated composition of semantic web services into executable processes. In Proc. of ISWC. LNCS, 380-394.
[11] Weiser, M. 1991. The computer for the 21st century. Scientific American, Special Issue on Communications, Computers, and Networks 272, 3, 78-89.
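To make the consumer-provider "transistor" links of Section 3.2 concrete, here is a minimal Java sketch of how such an option graph could be explored automatically. It assumes a deliberately simple representation in which each option carries a set of required context predicates and a list of provider options; all class names, option names, and predicates are illustrative, not Compose's actual data structures.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of consumer-provider links: an option can be checked out only if
// its own contextual requirements hold AND all of its provider options are
// recursively satisfied down to the leaves.
public class OptionGraph {
    record TaskOption(Set<String> requires, List<String> providers) {}

    static boolean satisfied(String name, Map<String, TaskOption> graph, Set<String> context) {
        TaskOption option = graph.get(name);
        if (!context.containsAll(option.requires())) {
            return false; // the "transistor" stays closed in this context of use
        }
        for (String provider : option.providers()) {
            if (!satisfied(provider, graph, context)) {
                return false; // a provider further down is not satisfied
            }
        }
        return true; // a green consumer-provider path exists down to the leaves
    }

    public static void main(String[] args) {
        // T2 "consumes" T3; this T2 option also requires an experienced user,
        // mirroring Figure 5d.
        Map<String, TaskOption> graph = Map.of(
                "T3", new TaskOption(Set.of(), List.of()), // leaf option
                "T2", new TaskOption(Set.of("ExperiencedUser"), List.of("T3")));
        System.out.println(satisfied("T2", graph, Set.of("ExperiencedUser"))); // prints: true
        System.out.println(satisfied("T2", graph, Set.of()));                  // prints: false
    }
}
```

In this toy graph the T2 option is checked out only for experienced users, so the same option graph yields different compositions under different contexts of use.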
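The pre/post-condition action model of Section 4.1 can be sketched as follows. The class design and the fact names (SmartphoneConnected, MapDisplayed) are invented for illustration; the paper only fixes the idea that an action is applicable when its pre-conditions hold and that its post-conditions are integrated into an updated state of the world.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of a planning action defined by sets of pre/post-conditions.
public class PlanningAction {
    final String name;
    final Set<String> pre;   // facts that must hold before execution
    final Set<String> post;  // facts added to the state of the world afterwards

    PlanningAction(String name, Set<String> pre, Set<String> post) {
        this.name = name;
        this.pre = pre;
        this.post = post;
    }

    // An action is applicable when the current state satisfies its pre-conditions.
    boolean applicable(Set<String> world) {
        return world.containsAll(pre);
    }

    // Executing the action yields an updated state of the world that
    // integrates the post-conditions, enabling further actions.
    Set<String> apply(Set<String> world) {
        Set<String> next = new HashSet<>(world);
        next.addAll(post);
        return next;
    }

    public static void main(String[] args) {
        Set<String> world = Set.of("SmartphoneConnected");
        PlanningAction displayMap = new PlanningAction("Display location map",
                Set.of("SmartphoneConnected"), Set.of("MapDisplayed"));
        if (displayMap.applicable(world)) {
            world = displayMap.apply(world);
        }
        System.out.println(world.contains("MapDisplayed")); // prints: true
    }
}
```

The main method replays the Smartphone example of Section 4.1: the connection predicate enables the map-display action, whose post-condition then enters the state of the world.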
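Section 4.2 casts UI options as planner actions over a state of the world made of predicates. The following greedy forward-chaining loop is a toy stand-in for a real planning solver (which would search a transition graph rather than chain greedily); the option names follow the case study, while the fact names and the Option record are assumptions made for the sketch.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of UI composition as planning: UI options are actions, the state
// of the world is a set of predicates, and the selected plan is the
// composed UI.
public class UiPlanner {
    record Option(String name, Set<String> pre, Set<String> post) {}

    static List<String> plan(Set<String> initial, Set<String> goal, List<Option> options) {
        Set<String> state = new HashSet<>(initial);
        List<String> plan = new ArrayList<>();
        boolean progress = true;
        while (!state.containsAll(goal) && progress) {
            progress = false;
            for (Option o : options) {
                // Apply any applicable option that still adds new facts.
                if (state.containsAll(o.pre()) && !state.containsAll(o.post())) {
                    state.addAll(o.post());
                    plan.add(o.name());
                    progress = true;
                }
            }
        }
        return state.containsAll(goal) ? plan : List.of(); // empty list: no plan found
    }

    public static void main(String[] args) {
        List<Option> options = List.of(
                new Option("Choose the city", Set.of("Has(User,Smartphone)"), Set.of("CitySet")),
                new Option("Choose the doctor", Set.of("CitySet"), Set.of("DoctorSet")),
                new Option("Call the office", Set.of("DoctorSet", "CarrierAvailable"),
                        Set.of("DoctorContacted")));
        Set<String> world = Set.of("Has(User,Smartphone)", "CarrierAvailable");
        System.out.println(plan(world, Set.of("DoctorContacted"), options));
        // prints: [Choose the city, Choose the doctor, Call the office]
    }
}
```

Starting from the predicates "Victor has a Smartphone" and "the carrier is available", the loop recovers the ordering of the case-study plan; removing CarrierAvailable from the initial state makes "Call the office" inapplicable and no plan is found.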
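The context-of-use manager of Section 5 translates the observed context into predicates such as Has("User", "Desktop Wall"). A minimal sketch of that translation follows; the Has(...) predicate comes from the paper, but the manager's inputs and method names are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of a context-of-use manager: observed context is translated into
// string predicates that define the current state of the world.
public class ContextManager {
    static String has(String who, String what) {
        return "Has(" + who + "," + what + ")";
    }

    // The boolean inputs stand in for real sensing of the context of use.
    static Set<String> stateOfTheWorld(boolean nearDesktopWall, boolean ownsSmartphone) {
        Set<String> world = new HashSet<>();
        if (nearDesktopWall) world.add(has("User", "Desktop Wall"));
        if (ownsSmartphone) world.add(has("User", "Smartphone"));
        return world;
    }

    public static void main(String[] args) {
        // Victor stands nearby a managed desktop wall and owns a Smartphone.
        System.out.println(stateOfTheWorld(true, true));
    }
}
```

The resulting predicate set is exactly the kind of initial state a planning solver would take as input.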
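The code composer of Section 5.2 nests contextualized widgets along the planned task tree. Below is a mock of that composition for the Figure 10 tree. The real COMET toolkit [3] is context-aware and far richer than this label-nesting record, and the task-to-COMET pairings assumed for C2, C5, and C6 are inferred from the case-study plan rather than stated in the paper.

```java
import java.util.List;

// Sketch of the code composer: each planned task is reified by a
// pre-defined widget, and widgets are nested following the task tree.
public class CodeComposer {
    record Widget(String id, String task, List<Widget> children) {
        // Render the widget tree as an indented outline of the layout.
        String render(String indent) {
            StringBuilder out = new StringBuilder(indent + id + ": " + task + "\n");
            for (Widget child : children) {
                out.append(child.render(indent + "  "));
            }
            return out.toString();
        }
    }

    public static void main(String[] args) {
        // The desktop wall task tree of Figure 10 (COMET identifiers from the paper;
        // the leaf pairings under C4 are assumed).
        Widget c4 = new Widget("C4", "Contact the doctor", List.of(
                new Widget("C2", "Call the office", List.of()),
                new Widget("C5", "Find the route information", List.of()),
                new Widget("C6", "Find the nearest pharmacy", List.of())));
        Widget c7 = new Widget("C7", "Get medical support", List.of(
                new Widget("C3", "Choose the city", List.of()),
                new Widget("C1", "Choose the doctor", List.of()),
                c4));
        System.out.print(c7.render(""));
    }
}
```

Running the sketch prints C7's frames with C3, C1, and C4 one level down, and C4's vertically aligned sub-widgets one level further, echoing the unified layout the code composer derives.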