Useware modeling for ambient intelligent production environments

Daniel Görlich
University of Kaiserslautern
Center for Human-Machine-Interaction
67663 Kaiserslautern
goerlich@mv.uni-kl.de

Kai Breiner
University of Kaiserslautern
Software Engineering Research Group
67663 Kaiserslautern
breiner@informatik.uni-kl.de

ABSTRACT
The impact of user interface quality has grown in software systems engineering, and it will grow further with upcoming paradigms such as Ambient Intelligence and Ubiquitous Computing, which confront the production industry with a huge diversity of new usage situations. In this paper, we show the adaptation of a task-oriented useware modeling language, employed in the model-based useware development process, to these future paradigms by extending its existing models with respect to the new requirements. The language reflects several user groups' tasks and user interface structure preferences in a common use model described in a system-independent way. It is being enhanced to describe spatial relations, connections between device compounds, and different ways of fulfilling tasks within different interaction zones. In the future, this model is intended to be used for the run-time generation of user interfaces for adaptive software and intelligent environments, especially in the area of production and manufacturing.

Categories and Subject Descriptors
D.5.2 [User Interfaces]: Theory and methods, User-centered design

General Terms
Performance, Design, Reliability, Experimentation, Human Factors, Languages.

Keywords
Useware, User Interfaces, Ambient Intelligence, Intelligent Production Environments.

1. INTRODUCTION
The level of acceptance of a user interface depends largely on its ease and convenience of use. A user can work with a technical device more efficiently when the user interface is tailored to the user's needs on the one hand and to the user's abilities on the other. Therefore, during a systematic development process, the users' needs, preferences, tasks, and mental models have to be surveyed in order to subsequently incorporate them into the development of a task-oriented, user-friendly device that is as convenient as possible to use.

Still, people think and act quite differently, even when they perform the same task. Their personal requirements may depend on a large variety of influences, ranging from their qualification, their area of activity, and their tasks to rapidly changing conditions such as mood, time of day, current location, or recent events. In production environments, it is nowadays common to train employees in the operation of devices and to restrict their access to safety-critical device functions, so all users (operators) are taught how to handle a device in advance. In a private environment, e.g., at home, users often have nothing more than a manual describing the functionality of a certain device – and they often refuse to read it right away. Obviously, a developer of consumer goods must design the appliance in an intuitive way for a large variety of users. Nearly all kinds of home appliances are already equipped with computing power or are being replaced with equivalent – or even higher-valued – technical devices. Hundreds of so-called "smart homes" and "living assistance scenarios" all around the world, centers of excellence for the technologically modern way of life, demonstrate networked, adaptable, "smart" devices that can be personalized or that even adapt themselves to their users actively and automatically, resulting in environments that proactively support users in their daily lives or are even able to detect and thereby prevent critical situations [6].

With the SmartFactoryKL in Kaiserslautern, Germany, a first intelligent factory has been built to provide a testbed and demonstration platform for smart technologies based on ad-hoc networks, dynamic system collaboration, and context-adaptive human-machine interaction systems. In the future, these systems will provide information at any time and in any place, making them more flexible and remotely accessible. This, however, results in a huge number of usage situations depending on user, situation, machine, environmental conditions, task, etc. A smart device's flexibility may thus become a disadvantage when information is not presented properly in terms of format and structure and in a context- and location-sensitive, task-oriented way [1].
Figure 1. Systematic useware development process [4].

Within a systematic useware development process (see Figure 1), the Center for Human-Machine-Interaction (ZMMI) has applied its Useware Markup Language (useML, see Figure 2), which harmonizes multiple users' mental task models in a common use model, in numerous successful joint ventures and industrial projects. With the launch of the SmartFactoryKL [8], it carries the future interaction paradigm of Ambient Intelligence into the production industry. This paper focuses on the enhancement of the Useware Markup Language to meet the mentioned challenges in future production environments, and it presents the current state of the research project GaBi (German abbreviation for "Generating task-oriented user interfaces in intelligent production environments"), which aims at developing possible solutions for human-machine interfaces in production facilities ten years from now – in the year 2017.
2. PROJECT STATUS
The research project GaBi aims at adapting and enhancing the existing, proven Useware Markup Language, as well as at establishing a repository of usability patterns tailored to the production industry, which will be used during the model-based generation of user interfaces at run-time. The project is scheduled for 2 (+1) years and is entering its second year.

The early analysis of user requirements for interactions with intelligent environments was complemented by a so-called Future Workshop, which took place on February 13th, 2007. It was attended by participants from different manufacturing companies and research institutes, including requirements analysts, data protection officers, philosophers, software engineers, jurisprudents, and usability experts. These specialists gave brief overviews of the current state of the art in human-machine interaction from their own professional points of view and recapped deficiencies of today's systems. After visions for the year 2017 had been collected in a second phase of the workshop, these ideas were finally evaluated against the identified deficiencies with respect to the feasibility of their implementation. The results were incorporated into a scenario describing natural human-machine interaction in a production facility in the year 2017 and into the extension of the use model [7].

One fact that emerged, among others, was that there will be no one-size-fits-all solution meeting all kinds of tasks and personal preferences. The experts instead pleaded for more flexible systems that adapt themselves to each user's needs and to the current context of use automatically, and that can additionally be adapted manually by the user according to his personal preferences. Still, human-machine interfaces should not be too flexible, in order to still meet safety specifications and to offer rarely needed but important functions, for example. Therefore, some basic standards are needed to increase the recognition value of a system to the user and to facilitate the use of different software products by increasing their compatibility. This tightrope walk between automatic self-adaptation to the individual user on the one hand and the standardization of user interfaces on the other requires a well-adjusted combination of model-based user interface generation and previously defined user interface components, or the use of so-called usability patterns.

Another common mistake pointed out by the experts concerned the design of easily and intuitively usable interfaces. While simplification by reduction of complexity is an honorable goal, systems must rather present interfaces corresponding to the task, qualification, preferences, and needs of the individual user. In this context, reducing complexity is not always the best way to optimize a user interface, because highly qualified users often need or simply want to access extra information and functionality. From this point of view, it becomes evident that users make different, but always high, demands on technical devices. They tend to interact with the same device in different ways, so developers are advised to consider different types of users throughout the whole development process.
3. MODELING WITH useML
Originally invented by [5] to structure user interfaces in a user- and task-oriented useware development process (see Figure 1), the XML-based Useware Markup Language (useML) arranges user or machine operator tasks in a hierarchy of abstract use objects (UOs) and five types of elementary use objects (EUOs), which are well suited for today's machine operations. The overall model is arranged as a tree, with UOs as nodes and EUOs at the leaf level. Starting from a high-level task description at the root node, the UOs are refined from high-level abstract tasks into more concrete subtasks, activities, actions, and, finally, elementary actions or operations such as pressing a button, entering a value, or reading displayed information from a screen (see Figure 2), which can be mapped directly to the corresponding functionality of a certain device. This task model is platform- and modality-independent and self-sufficient in terms of concrete design and realization, which are added during a later phase of the useware development process. Based on the multitude of task models elicited from (potential) users in the analysis phase, the use model is integrated from these models through harmonization and systematic structuring.

Figure 2. Classic useML scheme according to [4] and [5].
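To make the tree structure of UOs and EUOs concrete, the following sketch builds and walks a hypothetical useML-like fragment. The element names, attribute names, and task labels are invented for illustration only; they do not reproduce the actual useML schema defined in [5].

```python
import xml.etree.ElementTree as ET

# Hypothetical useML-style fragment: abstract use objects (UOs) as inner
# nodes, elementary use objects (EUOs) as leaves. All names are invented.
USE_MODEL = """
<useObject name="Operate press">
  <useObject name="Set up job">
    <elementaryUseObject type="enter" name="Enter target pressure"/>
    <elementaryUseObject type="select" name="Select material profile"/>
  </useObject>
  <useObject name="Run job">
    <elementaryUseObject type="trigger" name="Press start button"/>
    <elementaryUseObject type="inform" name="Read status display"/>
  </useObject>
</useObject>
"""

def leaf_tasks(xml_text):
    """Collect the elementary operations (tree leaves) of the use model."""
    root = ET.fromstring(xml_text)
    return [e.get("name") for e in root.iter("elementaryUseObject")]

print(leaf_tasks(USE_MODEL))
```

Only the leaves correspond to operations that map directly onto device functionality; everything above them remains platform- and modality-independent.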
The use model is designed to incorporate several user groups' different approaches to their specific tasks in a single model, which can be filtered by attributes such as user group and device type. It is also possible to build one single use model for a whole family of devices, i.e., a company's product line. This way, all devices developed on the basis of the same use model will share a consistent, recognizable, and thus intuitive interaction scheme, which might also cross platforms: only in the design and realization phases following the use model structuring (see Figure 2) are the actual target platforms and interaction modalities derived, which might be graphical user interfaces (GUIs) or speech interfaces, for instance.

Although the Useware Markup Language is well suited for the development of single devices or device families, it was not designed to describe more complex production processes or even facilities incorporating a high number of devices or machines of different types.
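Filtering the common use model by user group, as described above, can be sketched as a tree projection. The dictionary layout and group names below are our own illustrative assumptions; useML attaches such attributes in XML.

```python
# One shared use model annotated with the user groups allowed to see each
# node (illustrative layout, not the useML XML schema).
use_model = {
    "name": "Maintain robot cell",
    "groups": {"operator", "technician"},
    "children": [
        {"name": "Acknowledge fault", "groups": {"operator", "technician"},
         "children": []},
        {"name": "Recalibrate axes", "groups": {"technician"},
         "children": []},
    ],
}

def view_for(node, group):
    """Return the subtree visible to one user group, or None if hidden."""
    if group not in node["groups"]:
        return None
    kept = [v for c in node["children"] if (v := view_for(c, group))]
    return {"name": node["name"], "children": kept}

operator_view = view_for(use_model, "operator")
print([c["name"] for c in operator_view["children"]])
```

The same projection could filter by device type, so one model can drive the interfaces of an entire product line.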
Therefore, the GaBi project aims at improving the Useware Markup Language by expanding its scope, providing compatibility with future interaction paradigms such as Ambient Intelligence and Ubiquitous Computing. Such progressive environments will comprise hundreds or even thousands of cooperating devices and embedded systems with which we will interact quite naturally. Traditional interaction paradigms such as GUIs dedicated to a single device may no longer be sufficient, and users may employ numerous devices at the same time to fulfill their tasks. An appropriate use model must therefore contain a spatial representation of the relevant environments or spaces, as well as a description of the devices and device compounds involved in all potential users' work.

Within the GaBi project, we therefore adapted the use model scheme to these requirements (compare Figures 2 and 3). It now includes relations between locations, devices, and users, beginning with a hierarchical structure of (mobile or stationary) organizational rooms. The meets relation is used to model adjoining rooms. Using the joint relation, rooms can be structured into physical or logical subspaces. Completely unrelated rooms are expressed by the disjoint relation. All rooms are identified by names, but they can also have unique IDs, coordinates, or descriptions. As just mentioned, the rooms do not have to exist as physical rooms in the real world, but can also identify purely logical (i.e., organizational) rooms.
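One possible reading of the meets and joint relations is sketched below; the encoding, room names, and the treatment of disjointness (any pair that neither meets nor is joint) are our own assumptions, not the extended useML syntax.

```python
# joint: each room lists the subspaces it is structured into (illustrative).
rooms = {
    "plant": ["hall_1", "hall_2"],
    "hall_1": ["press_area", "storage"],
    "hall_2": [],
    "press_area": [],
    "storage": [],
}

# meets: unordered pairs of adjoining rooms (illustrative).
meets = {frozenset(("hall_1", "hall_2")), frozenset(("press_area", "storage"))}

def adjoins(a, b):
    """meets relation: True if the two rooms adjoin each other."""
    return frozenset((a, b)) in meets

def subrooms(name):
    """joint relation, taken transitively: all rooms contained in `name`."""
    direct = rooms[name]
    nested = [subrooms(r) for r in direct]
    return set(direct).union(*nested) if direct else set()

print(subrooms("plant"))
print(adjoins("hall_1", "hall_2"))
```

Because rooms may be purely organizational, nothing in this encoding requires the names to denote physical spaces.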
Figure 3. Integrated, room-based use model.

Within every room, multiple (mobile or stationary) device compounds can be located. Each device compound can in turn recursively comprise other device compounds, devices, components, or parts. If a device is subordinated to another device, i.e., is a child element of it, it is considered a part of that device. If a mobile device is subordinate to another device, it is considered to work only as long as it is an integral part of its parent device. Mobile devices can also be direct children of organizational rooms; in this case, they can only be used within these rooms to fulfill the tasks modeled later on.

Furthermore, each device (compound) can be operated differently depending on the interaction zone its user or operator is in. For example, a remote control panel for a robot picker arm might be configured to control the robot only within an effective range of a few meters, while it can request status information from a wider distance or even remotely via the intranet. Such interaction zones can be defined for each device or device compound, but they always belong to at least one of three abstract zones: the local zone (at or near the device), the regional zone (within a sealed-off data network), or the global zone. The local zone is further subdivided into an interaction zone, where the user can operate the device; a notification zone, where the user can still gather information presented by the device (e.g., via a display or loudspeaker); and an attention zone, where the user cannot yet gather detailed information, but may notice warning signs, blinking lights, unexpected messages, color codes, and so on. In certain cases, these zones can be identical, e.g., when a user possesses a remote control that lets him operate a device from a distance exceeding his physical limitations. Under normal circumstances, however, the zones overlap as shown in Figure 3.

For each interaction zone, every device in any room should possess at least one use model, and preferably exactly one. This, however, is no longer the classic use model invented by [5] and described above (see Figure 2); it has been extended beyond pure hierarchies of UOs and EUOs. Rather, sequences of use objects can be defined, and elementary use objects can be combined into compounds (EUOCs) with elaborate selection and execution rules. Further, any UO can be linked to other UOs, even in other subtrees of the hierarchy, thereby spanning a network of associated use objects within the classical hierarchy (see Figure 3). Within an EUOC, execution rules can define how many of the given EUOs can or must be executed and in which order. For example, it can be stated that at least 3 of all 5 components in a compound must be executed sequentially. Finally, conditional references between EUOCs and their parent UOs can be included, such as a break or post condition.
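An execution rule of the kind just described, e.g. "at least 3 of all 5 EUOs must be executed sequentially", can be checked as follows. The rule format is our illustration; the extended useML expresses such rules declaratively in XML.

```python
def satisfies_rule(executed, compound, minimum):
    """True if `executed` is an in-order selection of at least `minimum`
    elementary use objects from `compound` (an EUOC member list)."""
    if len(executed) < minimum:
        return False
    it = iter(compound)
    # Membership tests on the iterator consume it, so every executed step
    # must appear later in `compound` than the previous one: order is kept.
    return all(step in it for step in executed)

compound = ["e1", "e2", "e3", "e4", "e5"]
print(satisfies_rule(["e1", "e3", "e4"], compound, 3))  # 3 of 5, in order
print(satisfies_rule(["e3", "e1", "e4"], compound, 3))  # out of order
```

Break and post conditions attached to an EUOC would be evaluated in addition to such a cardinality-and-order rule.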
4. CREATING THE UI
Based on this integrated use model, the corresponding UI can be built by applying platform-specific (stylesheet) transformations. Thus, only one use model is needed to describe the human-machine interaction independently of the device that will be used to communicate with the user. An important property here is that the UI can be created at development time and then be deployed to the destination platform and used there.

Due to the highly dynamic environment, new interaction devices can be integrated seamlessly at any time, and the use model needs to be reinterpreted accordingly. Therefore, it is important to be able to alter the appearance of the UI at run-time, integrating new functionality to reflect the current configuration of the whole production environment. Unlike in the previous single-device approach, where the usage situation was static, the UI code now has to be generated, deployed, and executed at run-time. In a previous approach, which generated a model-based user interface at run-time [2], we observed that the time consumption of the entire process is very high: performing all activities necessary for providing a complete user interface built from an abstract description (task model and usage situation) takes far too long to be really usable.
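The platform-specific transformation step can be sketched as a mapping from abstract EUO types to concrete interaction elements per target platform. Real useML tooling uses stylesheet (XSLT-like) transformations; the lookup table, platform names, and EUO types below are invented for illustration.

```python
# Illustrative mapping of abstract elementary use object types to concrete
# interaction elements of two target platforms.
WIDGETS = {
    "gui":    {"enter": "text field", "select": "drop-down list",
               "trigger": "button", "inform": "label"},
    "speech": {"enter": "spoken prompt", "select": "spoken menu",
               "trigger": "voice command", "inform": "announcement"},
}

def transform(euos, platform):
    """Derive a concrete UI description from abstract EUOs for one platform."""
    return [(name, WIDGETS[platform][kind]) for kind, name in euos]

euos = [("enter", "Enter target pressure"), ("trigger", "Start press")]
print(transform(euos, "gui"))
print(transform(euos, "speech"))
```

The point of the single abstract model is visible here: the same EUO list yields a graphical and a speech interface without being remodeled.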
This raised the idea that every device to be integrated into such an environment should already provide a set of user interface components, each well developed for a certain pre-defined platform. At run-time, the corresponding user interface components are transferred to the interaction device, where they are combined into an integrated UI acting as a universal controller. When a new device appears in the environment, only the new UI component needs to be deployed to the interaction device.

In useML, such UI components represent the implementation of EUOCs or even entire UOs, depending on the current granularity. Originally, an EUOC consists of a description of how the human-machine interaction has to be performed and of the steps that can be mapped directly to the device's functionality. Since each UI component is a completely encapsulated interaction unit, there is no longer a need for explicit modeling. Therefore, the enhanced useML will also accept components in place of EUOCs and UOs.
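The run-time composition of such pre-built components can be sketched as a small registry on the interaction device: each device contributes its own component per platform, and the "universal controller" is simply the set of components of the devices currently present. The class and names below are illustrative assumptions.

```python
class Environment:
    """Illustrative registry of UI components shipped by ambient devices."""

    def __init__(self):
        self.components = {}  # device id -> {platform -> UI component}

    def device_appears(self, device, platform_components):
        # Only the new device's components are deployed; nothing is rebuilt.
        self.components[device] = platform_components

    def device_leaves(self, device):
        self.components.pop(device, None)

    def compose_ui(self, platform):
        """Combine the components of all currently present devices."""
        return {dev: comps[platform]
                for dev, comps in self.components.items() if platform in comps}

env = Environment()
env.device_appears("press", {"gui": "<press panel>", "speech": "<press dialog>"})
env.device_appears("robot", {"gui": "<robot panel>"})
print(env.compose_ui("gui"))
env.device_leaves("robot")
print(env.compose_ui("speech"))
```

Because composition only selects and combines existing components, it avoids the costly from-scratch generation observed in [2].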
Figure 4. Effect of usability patterns.

Another important aspect concerns the so-called usability patterns and usability guidelines [3], which capture "best practice" knowledge in usability engineering. There are different types of these patterns, e.g., some describing the interaction of humans and machines, others including layout descriptions. Figure 4 shows the effect of the alternating-row-color pattern and the table-header pattern on the example of a regular table. The enhanced useML allows potentially applicable patterns to be annotated to entire subtrees that fulfill certain pre-conditions.
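Annotating patterns to subtrees under pre-conditions can be sketched as a rule table: a pattern applies wherever its condition holds on a node. The two pattern names follow the text above; the condition encoding and node attributes are our own assumptions.

```python
# Each usability pattern carries a pre-condition deciding where it applies
# (illustrative encoding; the repository envisioned in GaBi would store
# such rules in a machine-readable form).
patterns = [
    ("table-header",          lambda node: node.get("kind") == "table"),
    ("alternating-row-color", lambda node: node.get("kind") == "table"
                                           and node.get("rows", 0) > 1),
]

def applicable(node):
    """All patterns whose pre-condition the given use model node fulfills."""
    return [name for name, condition in patterns if condition(node)]

print(applicable({"kind": "table", "rows": 20}))
print(applicable({"kind": "form"}))
```

During UI composition, the matched patterns would then steer how the selected components are arranged and styled.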
5. SUMMARY
From our requirements research, comprising the Future Workshop with a heterogeneous set of participants, the fact emerged that user interfaces in future intelligent production environments have to be task-oriented in order to achieve a reduction of complexity compared to the human-machine interfaces currently used in such factories. Therefore, we extended the proven useML according to the new paradigm of Ambient Intelligence in future production environments. Hence, we included the possibility of structuring the spatial environment (spatial use model), which is essential to such context-sensitive systems. Another important addition is the configuration model of interacting devices, which was also included in the altered model. Now every device can be equipped with a separate use model describing its own way of interaction.

6. FUTURE WORK
Nevertheless, many problems remain regarding the introduction of the Ambient Intelligence paradigm into intelligent production environments. For example, the fact that the device configuration will change at run-time, according to the factory configuration, needs to be reflected. Therefore, methods have to be developed that address the integration of actual context information into the transformation process of the model.

Another important issue is the implementation of this adaptive system. Since it is not effective to create an entire UI from scratch at run-time, we already mentioned the idea of composing the UI from single components, which need to be provided in the first place (e.g., by the devices themselves). The composition of these single components is also a topic of interest, as is the way the composition will be influenced by usability patterns. These design guidelines already exist, but they are neither formalized in a machine-readable way, nor have explicit patterns been identified for production environments.

Finally, an important step in our evaluation process will be a feasibility study implementing and testing our concept. For this purpose, the SmartFactoryKL is the ideal testbed for simulating future production environments.

7. ACKNOWLEDGMENTS
This work was supported in part by the GaBi project at the University of Kaiserslautern, which is funded by the German Research Foundation (DFG).

8. REFERENCES
[1] Bödcher, A., Mukasa, K., and Zühlke, D. Capturing Common and Variable Design Aspects for Ubiquitous Computing with MB-UID. In Proceedings of the Workshop on Model Driven Development of Advanced User Interfaces, Montego Bay, Jamaica, 2005.
[2] Trapp, M., and Schmettow, M. Consistency in Use through Model-based User Interface Development. In CHI 2006 Workshop on The Multiple Faces of Consistency, Montreal, Canada, 2006.
[3] Welie, M. v., Veer, G. C. v. d., and Eliëns, A. Patterns as Tools for User Interface Design. In International Workshop on Tools for Working with Guidelines (Biarritz, France, 2000), 313-324.
[4] Zuehlke, D. Useware-Engineering für technische Systeme. Springer, Berlin, 2004.
[5] Reuther, A. useML – Systematische Entwicklung von Maschinenbediensystemen mit XML. Ph.D. thesis, University of Kaiserslautern, 2003.
[6] Nehmer, J., Becker, M., Karshmer, A., and Lamm, R. Living assistance systems: an ambient intelligence approach. In Proceedings of the 28th International Conference on Software Engineering (Shanghai, China, May 20-28, 2006), ICSE '06, ACM Press, New York, NY, 43-50.
[7] Görlich, D., and Breiner, K. Intelligent task-oriented user interfaces in production environments. In 1st International Workshop on Model-Driven User-Centric Design & Engineering (Seoul, Korea, September 2007), IFAC, 2007.
[8] Pohlmann, E. G., Bödcher, A., and Zühlke, D. SmartFactoryKL – Informationstechnik für die Fabrik der Zukunft. In atp – Automatisierungstechnische Praxis 47(12), 2005, 48-52.