<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The end-user vs. adaptive user interfaces</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name><given-names>Veit</given-names> <surname>Schwartze</surname></string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name><given-names>Frank</given-names> <surname>Trollmann</surname></string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name><given-names>Sahin</given-names> <surname>Albayrak</surname></string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>DAI-Labor</institution>
          ,
          <addr-line>Ernst-Reuter-Platz 7, 10781 Berlin</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <author-notes>
        <fn id="fn1">
          <label>1</label>
          <p>Direct manipulation vs. interface agents. Shneiderman, B. &amp; Maes, P. Interactions, ACM, 1997, 4, 42-61.</p>
        </fn>
      </author-notes>
      <abstract>
        <p>In smart environments, applications can support users in their daily life by being ubiquitously available through various interaction devices. Applications deployed in such an environment have to be able to adapt to different context of use scenarios in order to remain usable. For this purpose, the designer of such an application defines adaptations from her point of view. Because some situations are unforeseeable at design time, the user sometimes needs to adjust the designer's decisions; for instance, the capabilities and personal preferences of the user cannot be completely foreseen by the designer. The user needs a way to understand and change the adaptations defined by the designer and to define new adaptations. This requires the definition of a set of contexts of use and of the adaptations applied to the user interface in each situation. For this reason, supportive user interfaces should enable the user to control and evaluate the state of the adaptive application and to understand “What happens and why?”<xref ref-type="fn" rid="fn1">1</xref> In this paper, we describe the requirements and functions of a supportive user interface for evaluating and controlling an adaptive application deployed in a smart environment.</p>
      </abstract>
      <kwd-group>
        <kwd>Context-aware applications</kwd>
        <kwd>end-user support</kwd>
        <kwd>adaptation and situation definition</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>Applications deployed in smart environments often aim to support users in their everyday life. Such applications must be able to adapt to different context of use scenarios to remain usable in every situation. The large set of possible device properties leads to an infinite number of possible situations, which cannot be considered completely at design time. For instance, there is a large set of heterogeneous displays for graphical user interfaces, which differ in their aspect ratio, resolution, and input possibilities. In addition, each user has different abilities or disabilities as well as a personal taste. Such preferences cannot be predicted or categorized in a reliable way at design time. The ability of the user to distribute user interface elements to different devices also raises the problem of multi-application scenarios. This creates the need for the user to understand and control adaptations of the application at runtime in order to personalize it to her liking. In the following, we describe the requirements and functions of a supportive user interface that enables the user to evaluate and control user interface adaptations.</p>
      <p>The next section describes the problem in more detail by means of an example application. This is followed by the requirements that have to be met by a supportive user interface. The work-in-progress section then gives an overview of the layout and adaptation models, which are needed to generate the position, size, and style of each user interface element and to change these layout dimensions for a specific situation. The conclusion summarizes the paper and describes the next steps.</p>
    </sec>
    <sec id="sec-2">
      <title>PROBLEM DESCRIPTION</title>
      <p>In this section we illustrate the problem space by the example of a cooking assistant. Afterwards, we derive problems that have to be solved within the scope of adaptive user interfaces.</p>
      <p>
        The cooking assistant is an application that enables the user to search for recipes and supports her while cooking them. During the cooking process, the cooking assistant is able to control the devices in the kitchen. We deployed the cooking assistant in a real kitchen environment, as depicted in Figure 1, top left. The main screen, shown in Figure 1, top right, guides the user through the cooking steps and provides help if needed. The bottom half of Figure 1 illustrates several spots corresponding to the different working positions and user tasks in the kitchen. In [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], we define different automatic adaptations that adapt the user interface to specific situations, defined by working steps, in order to support the user while operating in the kitchen.
Two examples are:
      </p>
      <list list-type="bullet">
        <list-item>
          <p><bold>Distance-based adaptation:</bold> While cleaning dishes, the user wants to learn more about the next step. A video helps her to understand what has to be done. Depending on the user's distance to the screen, the layout algorithm increases the size of the video element to improve its legibility. In this case, the distance of the user to the interaction device is used to calculate the enlargement factor for this element, as sketched below.</p>
        </list-item>
        <list-item>
          <p><bold>Spot-based adaptation:</bold> While using the cooking assistant, the user prepares ingredients, follows the cooking advice, and controls the kitchen appliances at a working surface. Because it is difficult to look at the screen from this position, shown in Figure 1, bottom, the important information (the step description and the list of required ingredients) is highlighted.</p>
        </list-item>
      </list>
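      <p>To make the distance-based adaptation concrete, the following minimal sketch shows one way such an enlargement factor could be computed; the formula, constants, and names are illustrative assumptions, since the paper does not specify the calculation.</p>
      <preformat>
# Illustrative sketch: scale a UI element with the user's distance
# to the screen (formula and constants are assumptions, not the
# actual implementation).
def enlargement_factor(distance_m, reference_m=1.0, max_factor=3.0):
    """Grow the element linearly beyond a reference viewing distance,
    capped so that the layout stays solvable."""
    factor = max(1.0, distance_m / reference_m)
    return min(factor, max_factor)

# A 320 px wide video viewed from 2.5 m is enlarged to 800 px.
video_width = int(320 * enlargement_factor(distance_m=2.5))
      </preformat>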
      <p>The described adaptations can improve the interaction with the application, but the user is not able to influence the adaptations or to interfere, which can lead to frustration and to rejection of the application; for instance, when the user is concentrating on the ingredients list or the textual step description and the size of these elements is scaled down. This problem space can be divided into the evaluation and the control of the system state and behavior.</p>
      <p>Incomprehensible adaptations can confuse the user. The user has little knowledge about the state of the system and its internal representation of the environment, user, and platform characteristics. Therefore, it is hard for her to comprehend why a specific adaptation has been applied. It is not only important to know why something happens but also how to influence the behavior of the user interface generation. Environment conditions and user characteristics that are unknown at design time lead to the wish to adjust adaptations at runtime, e.g., to fit the button size to the preferences, capabilities, or role of the actual user. For example, a user with color blindness or a degeneration of the macula (the loss of vision in the center of the visual field because of damage to the retina) may wish to adjust the contrast and the font size to improve the visibility and readability of the user interface. Similarly, left-handed users may wish to adjust the position of interaction elements (e.g., buttons) so that their hands do not hide important information during interaction. Additionally, supportive user interfaces can allow the user to define individual distributions, which leads to free-space or multi-application scenarios. These problems must be solved. The next section defines the requirements of an approach that enables the user to adjust, interfere with, or define new adaptations.</p>
    </sec>
    <sec id="sec-3">
      <title>REQUIREMENTS</title>
      <p>The requirements of a supportive application are derived from the need to evaluate the state of the system and to control the behavior of the adaptation algorithm. They are divided into:</p>
      <list list-type="bullet">
        <list-item>
          <p>an approach to define the layout of an application and its adaptations to different context of use scenarios, and</p>
        </list-item>
        <list-item>
          <p>support for the end-user to change these adaptations according to her preferences.</p>
        </list-item>
      </list>
      <p>
        As mentioned before, heterogeneous interaction devices, sensors, and appliances make the development of user interfaces for smart environments a challenging and time-consuming task. To reduce the complexity of the problem, user interface developers can utilize models and modeling languages. However, user interfaces generated from models at design time often fail to provide the required flexibility because decisions made at design time are no longer available at runtime. To handle this issue, the use of user interface models at runtime has been suggested [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>This approach shifts the focus from design time to runtime and raises the need to support the end-user in the development and personalization of applications. A meta-user interface offers an abstract view of the state of the system and provides an interface to influence its behavior. In [<xref ref-type="bibr" rid="ref0">1</xref>], the system provides access to the task model and the platform model, whereby the platform model shows the interaction devices currently available in the home. Like the described approach, the supportive user interface should visualize the user, environment, and platform information of the running system in a simple way. The situations and the corresponding adaptations (system- and user-initiated) should also be transparent to the user. This means that the representation of the adaptation rules must describe in detail why and how the user interface changes, and it must enable the user to interfere. To make the execution of user interface adaptations more comprehensible for the user, feedback such as an animation of the user interface changes should be provided.</p>
      <p>Additionally, the user needs a way to delete or adjust layout adaptation rules and thus to change both the situation precondition and the adaptation. A preview of the changes helps to avoid wrong decisions. The definition of a new adaptation rule requires the selection of context variables and of the accuracy and range of values that describe the situation. Afterwards, the user defines the adaptation to be executed: first she selects the layout dimension (size, orientation, containment) she wishes to influence; then she selects a specific statement and the changes to be realized by the layout generation algorithm. Furthermore, some statements need parameters, e.g., a statement that defines the size of a button depending on the width of a finger.</p>
        <p>The state of the realization is described in the next section.</p>
    </sec>
    <sec id="sec-4">
      <title>WORK IN PROGRESS</title>
      <p>
        In our implementation, the components that realize adaptations of user interfaces and that can be adjusted at runtime are the layout model and the adaptation model, both based on a models@run.time [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] approach, so that the same models are used at design time and at runtime.
      </p>
      <p>
        Additionally, we have taken the first steps to expand the meta-user interface approach described in [
        <xref ref-type="bibr" rid="ref2">3</xref>
        ] in order to provide a simple way of adapting the layout generation algorithm to the needs of the user.
      </p>
    <sec id="sec-5">
      <title>Layout model</title>
      <p>
        The layout model defines the structure of the user interface and the spatial relationships between user interface elements. It consists of the user interface structure and a set of statements. The user interface structure is a tree-like hierarchy of Containers and UI-Elements. Containers can contain a set of nested containers and nested elements. UI-Elements are the visible parts of the user interface structure and present information to the user. The statements describe the size, style, and spatial relationships of the user interface elements. The approach differs from previous approaches in two general aspects. First, we interpret the design models, such as the task tree, the dialog model, the abstract user interface model, and the concrete user interface model. From this information, we derive the initial structure of the user interface and suggest statements that influence the spatial relationships and size of user interface elements. Therefore, we propose an interactive, tool-supported process that reduces the amount of information that needs to be specified for the layout. The tool enables designers to comfortably define design model interpretations by specifying statements and subsequently applying them to all screens of the user interface. The layout model editor is described in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] in more detail.
      </p>
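      <p>As an illustration of this structure, the following sketch encodes a small layout model in Python; the class names and the statement vocabulary are illustrative assumptions, not the actual schema of the approach.</p>
      <preformat>
# Illustrative encoding of the layout model: a tree of Containers
# and UI-Elements plus a set of layout statements (assumed names).
class UIElement:
    def __init__(self, name):
        self.name = name  # visible; presents information to the user

class Container:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)  # nested containers/elements

# Statements describe size, style, and spatial relations, e.g.:
statements = [
    ("left_of",  "ingredients", "step_description"),
    ("min_size", "video",       (320, 240)),
]

root = Container("main_screen", [
    Container("content", [UIElement("step_description"),
                          UIElement("ingredients"),
                          UIElement("video")]),
])
      </preformat>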
      <p>
        Furthermore, unlike other layout generation approaches such as [
        <xref ref-type="bibr" rid="ref1">2</xref>
        ], we create the constraint system at runtime. A subtree of the user interface structure marks the user interface elements that are currently part of the application's visible user interface; the set of statements regarding these nodes is evaluated and yields a constraint system that is solved by a Cassowary constraint solver. The result of a successful layout calculation is a set of elements, each consisting of a location (an absolute x, y coordinate) and a width and height value.
      </p>
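      <p>The runtime constraint solving step could look like the following sketch, which uses the kiwisolver package, a Cassowary-derived solver; the paper does not name a concrete library, and all variable names here are assumptions.</p>
      <preformat>
# Sketch: turn two statements into Cassowary-style constraints
# and solve them (library choice and names are assumptions).
from kiwisolver import Solver, Variable

solver = Solver()
video_w = Variable("video_w")
text_w = Variable("text_w")
screen_w = 1280

solver.addConstraint(video_w + text_w == screen_w)  # fill the row
solver.addConstraint(video_w >= 2 * text_w)         # emphasize video
solver.addConstraint((text_w >= 200) | "strong")    # readable minimum

solver.updateVariables()
print(video_w.value(), text_w.value())  # absolute sizes for rendering
      </preformat>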
    </sec>
    <sec id="sec-6">
      <title>Adaptation model</title>
      <p>The adaptation model describes possible situations and the corresponding adaptations of the layout model of the application. For this purpose, the adaptation model consists of adaptation definitions. Each adaptation definition is a tuple of a situation, describing when the rule should be applied, and an adaptation rule, describing how the layout model is adapted. The adaptation rules may change the user interface structure and may also add, modify, or delete statements. The center of Figure 2 shows an example of an adaptation graph: each node defines a state of the layout model, and each edge defines a set of adaptation rules that transform the layout model into a state applicable for a specific situation. A situation is determined by a certain state of the user, device, and environment.</p>
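      <p>A minimal sketch of such adaptation definitions, assuming the tuple-based statement representation from the previous sketches (all names and the matching logic are illustrative assumptions):</p>
      <preformat>
# Sketch: the adaptation model as a list of adaptation definitions,
# each a (situation, adaptation rule) tuple. Names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Situation:
    """Predicate over the context of use (user, device, environment)."""
    variable: str   # e.g. "user.distance_m"
    minimum: float  # rule applies while the variable is above this value

    def matches(self, context):
        return context.get(self.variable, 0.0) >= self.minimum

@dataclass
class AdaptationRule:
    """Describes how the layout model is adapted."""
    add: list = field(default_factory=list)     # statements to add
    remove: list = field(default_factory=list)  # statements to delete

adaptation_model = [
    (Situation("user.distance_m", 2.0),
     AdaptationRule(add=[("scale", "video", 2.0)])),
]

def adapt(statements, context):
    """Apply every rule whose situation matches the current context."""
    for situation, rule in adaptation_model:
        if situation.matches(context):
            statements = [s for s in statements if s not in rule.remove]
            statements = statements + rule.add
    return statements
      </preformat>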
      <p>Additionally, we have taken first steps towards defining a supportive user interface.</p>
    </sec>
    <sec id="sec-7">
      <title>Supportive user interface</title>
      <p>The supportive user interface should provide a way to understand the representation of context information within the system and should allow the user to manipulate the user interface generation and adaptation algorithms.</p>
      <p>
        To meet the requirements defined above, a supportive user interface should hide the complexity of the interaction space (various sensors gathering information about the environment, heterogeneous interaction devices, and user characteristics) from the user. The complexity of situation definition and recognition must also be encapsulated. Accordingly, the situation description and the adaptation definition must be as simple as possible but as complex as necessary. The user must be able to define powerful adaptations but should not be overstrained. One way to achieve this is to derive semantic information from the user interface models in order to visualize the affected elements on the screen. To preview the user interface changes, the supportive user interface application simulates the layout model changes and visualizes the result of the calculation to the user. In [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], we use the information derived from the concrete user interface model (e.g., all button elements) and allow the user to define a statement that influences the size of these elements. A screenshot is shown in Figure 3. The supportive user interface application adds the statement to the layout model and triggers the recalculation mechanism to update the user interface of the application.
      </p>
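      <p>The interplay of preview, confirmation, and recalculation could look like the following sketch; all function and statement names are assumptions for illustration, not the actual implementation.</p>
      <preformat>
# Sketch: the supportive UI previews a user-defined statement before
# committing it to the layout model (all names are assumptions).
def preview_and_commit(layout_model, statement, recalculate, confirm):
    """Simulate the change, show the result, commit on confirmation."""
    candidate = layout_model + [statement]
    preview = recalculate(candidate)   # simulated layout calculation
    if confirm(preview):               # user accepts the previewed layout
        layout_model.append(statement)
        recalculate(layout_model)      # update the running application UI
        return True
    return False

# Example: enlarge all button elements derived from the concrete UI model.
committed = preview_and_commit(
    layout_model=[("left_of", "ingredients", "step_description")],
    statement=("min_size", "all_buttons", (80, 80)),
    recalculate=lambda stmts: f"layout with {len(stmts)} statements",
    confirm=lambda preview: True,      # stands in for user feedback
)
      </preformat>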
      </sec>
    </sec>
    <sec id="sec-8">
      <title>CONCLUSION</title>
      <p>In this paper, we have defined the requirements of a supportive user interface (SUI) to control and evaluate the state of an adaptive application, and we have shown the first steps of its implementation.</p>
      <p>In the future, we plan to increase the share of statements that are derived automatically from the user interface models for the layout generation process. Additionally, we will take the domain model objects influenced by the user interface elements into account. The resulting set of statements reduces the number of statements the designer has to define. At runtime, the situation recognition and the adaptation algorithm must be evaluated, especially with respect to the handling of imperfect (e.g., inaccurate, incomplete, or conflicting) context information and the adaptation of the user interface over time.</p>
      <p>Last but not least, we have to implement the SUI concepts and validate the acceptance of our approach in user studies. Additionally, because the user does not want to define all adaptations manually, we want to explore the possibilities of machine learning algorithms to reduce and simplify the definition of adaptations.</p>
    </sec>
    <sec id="sec-9">
      <title>REFERENCES</title>
      <p>1.Joelle Coutaz. Meta-user interfaces for ambient spaces:
Can model-driven engineering help? In Margaret H.
Burnett, Gregor Engels, Brad A. Myers and Gregg
Rothermel, editors, End-User Software Engineering,
number 07081 in Dagstuhl Seminar Proceedings.
Internationales Begegnungs und Forschungszentrum für
Informatik (IBFI), Schloss Dagstuhl, Germany, 2007.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref0">
        <mixed-citation>1. <string-name><given-names>Joelle</given-names> <surname>Coutaz</surname></string-name>. <article-title>Meta-user interfaces for ambient spaces: Can model-driven engineering help?</article-title> In Margaret H. Burnett, Gregor Engels, Brad A. Myers, and Gregg Rothermel, editors, <source>End-User Software Engineering</source>, number 07081 in Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl, Germany, <year>2007</year>.</mixed-citation>
      </ref>
      <ref id="ref1">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Christof</given-names>
            <surname>Lutteroth</surname>
          </string-name>
          , Robert Strandh, and
          <string-name>
            <given-names>Gerald</given-names>
            <surname>Weber</surname>
          </string-name>
          .
          <article-title>Domain specific high-level constraints for user interface layout</article-title>
          .
          <source>Constraints</source>
          ,
          <volume>13</volume>
          (
          <issue>3</issue>
          ):
          <fpage>307</fpage>
          -
          <lpage>342</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          3.
          <string-name>
            <given-names>Dirk</given-names>
            <surname>Roscher</surname>
          </string-name>
          , Marco Blumendorf, and
          <string-name>
            <given-names>Sahin</given-names>
            <surname>Albayrak</surname>
          </string-name>
          .
          <article-title>Using Meta user interfaces to control multimodal interaction in smart environments</article-title>
          . In Gerrit Meixner; Daniel Görlich;
          <string-name>
            <given-names>K.</given-names>
            <surname>Breiner; H. Huÿmann</surname>
          </string-name>
          ;
          <string-name>
            <given-names>A.</given-names>
            <surname>Pleuÿ</surname>
          </string-name>
          ; S.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Sauer; J. Van</surname>
          </string-name>
          den Bergh, editor,
          <source>Proceedings of the IUI'09 Workshop on Model Driven Development of Advanced User Interfaces</source>
          , volume
          <volume>439</volume>
          <source>of CEUR Workshop Proceedings, ISSN 1613-0073. CEUR Workshop Proceedings (Online)</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Veit</given-names>
            <surname>Schwartze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Sebastian</given-names>
            <surname>Feuerstack</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Sahin</given-names>
            <surname>Albayrak</surname>
          </string-name>
          .
          <article-title>Behavior sensitive user interfaces for smart environments</article-title>
          .
          In <source>HCII 2009 - User Modeling</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Veit</given-names>
            <surname>Schwartze</surname>
          </string-name>
          , Marco Blumendorf and
          <string-name>
            <given-names>Sahin</given-names>
            <surname>Albayrak</surname>
          </string-name>
          .
          <article-title>Adjustable context adaptations for user interfaces at runtime</article-title>
          .
          In <source>Proceedings of the Working Conference on Advanced Visual Interfaces</source>
          , pages
          <fpage>321</fpage>
          -
          <lpage>325</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. <string-name><given-names>Gordon</given-names> <surname>Blair</surname></string-name>, Nelly Bencomo, and Robert B. France. <article-title>Models@run.time</article-title>. <source>Computer</source>, <volume>42</volume>(<issue>10</issue>):<fpage>22</fpage>-<lpage>27</lpage>, Oct. <year>2009</year>.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. <string-name><given-names>Sebastian</given-names> <surname>Feuerstack</surname></string-name>, Marco Blumendorf, Veit Schwartze, and <string-name><given-names>Sahin</given-names> <surname>Albayrak</surname></string-name>. <article-title>Model-based layout generation</article-title>. In Paolo Bottoni and Stefano Levialdi, editors, <source>Proceedings of the Working Conference on Advanced Visual Interfaces</source>. ACM, <year>2008</year>.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>