<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Multimodal User Interface Model For Runtime Distribution</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dirk Roscher</string-name>
          <email>Dirk.Roscher@DAI-Labor.de</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Blumendorf</string-name>
          <email>Marco.Blumendorf@DAI-Labor.de</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sahin Albayrak</string-name>
          <email>Sahin.Albayrak@DAI-Labor.de</email>
        </contrib>
        <aff>DAI-Labor, TU-Berlin, Ernst-Reuter-Platz, Berlin, Germany</aff>
      </contrib-group>
      <fpage>5</fpage>
      <lpage>8</lpage>
      <abstract>
        <p>Smart environments provide numerous networked interaction resources (IRs), allowing users to interact with services in many different situations via many different (combinations of) IRs. In such environments it is necessary to adapt the user interface dynamically at runtime to each new situation to allow ongoing interaction in changing contexts. Our approach makes it possible to dynamically select the combination of IRs suitable for interaction in the current context at any time. The decision is based on information from a user interface model executed at runtime and a context model gathering information about the environment. The user interface model supports the CARE properties to specify flexible multimodal interaction.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
        In this work, we first present the requirements to
dynamically adjust the used IRs at runtime (section 3).
Afterwards, our model-based approach targeting the
requirements is described. A UI model reflects the various
UI elements as well as the relations between them in terms
of CARE properties [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] to achieve multimodal interaction
(section 4). At runtime, the modeled UI description in
combination with information about the available IRs from
an additional context model is used to continuously adjust
the UI distribution according to the current situation
(section 5). Before we describe our approach, related work
is presented in the next section.
      </p>
      <p>
        RELATED WORK
Model-based development is a promising approach in the
context of multimodal UIs. According to the classification
of the Cameleon Reference Framework proposed in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], UI
models feature four levels of abstraction: Concepts and
Task Model, Abstract, Concrete and Final User Interface.
Based on this general framework, several User Interface
Description Languages (UIDLs) have been designed. The
most relevant with respect to the goals of this work are
UsiXML [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and TERESA [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Their goal is to develop
multimodal UIs but they only support a fixed set of IRs,
whereas we aim to support sets of IRs changing at runtime.
The distribution of UIs has also been a topic of various
research activities, ranging from the characterization of
distributed UIs [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] to development support for specifying
distributed UIs [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The approach of Elting and
Hellenschmidt [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] supports simple conflict resolution
strategies when distributing output across graphical UIs,
speech syntheses and virtual characters. The main goal is
the semantic processing of input and output in distributed
systems. The dynamic redistribution and definition of
dynamic UI models have thus not been the focus of the
approach. The I-AM (Interaction Abstract Machine) system
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] presents a software infrastructure for distributed
migratable UIs. It provides a middleware for the dynamic
coupling of IRs to form a unified interactive space. The
approach supports dynamic distribution across multiple
heterogeneous platforms but does not support the arbitrary
recombination of IRs and is limited to graphical output as
well as mouse and keyboard input.
      </p>
      <p>Our approach utilizes the modeled design information at
runtime to dynamically adjust the combination of the used
IRs. In the following we first describe the requirements that
need to be fulfilled to allow the distribution of multimodal
UIs at runtime. Afterwards, we show how these
requirements are implemented by our approach.</p>
      <p>DYNAMIC UI DISTRIBUTION
To support UI (re-)distribution at runtime, several
requirements need to be fulfilled; they are derived from
the abstract architecture depicted in Figure 1:
1. A user interface description is needed that supports
different variants of multimodal interaction and benefits
from the advantages of specific modalities and modality
combinations. Information about the supported modality
combinations needs to be available at runtime.
2. To combine IRs from different platforms, the user
interface description needs to address single IRs.
3. Environment information must be gathered and kept up
to date (e.g. IRs, users and their positions).
4. An instance is required that incorporates information
about the UI and the environment and determines the most
appropriate combination of IRs at any time.
5. The combination of arbitrary IRs from different
platforms also requires a mechanism that allows controlling
IRs independently of each other.</p>
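      <p>For illustration, the following minimal Java sketch shows one possible mapping of the five requirements onto component interfaces. All type and method names are illustrative assumptions, not the interfaces of the actual implementation.</p>
      <preformat><![CDATA[
// Minimal, hypothetical Java sketch of the abstract architecture implied by
// the five requirements; all names are illustrative, not the paper's API.
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;

record ModalityCombination(Set<String> modalities) {}
record UserPosition(double x, double y) {}

interface MultimodalUIModel {                       // requirement 1
    Set<ModalityCombination> supportedCombinations(); // queryable at runtime
}

interface InteractionResource {                     // requirement 2
    String id();                                    // individually addressable
    Set<String> modalities();                       // e.g. "graphical", "voice"
}

interface ContextModel {                            // requirement 3
    List<InteractionResource> availableResources(); // kept up to date by observers
    UserPosition positionOf(String userId);
}

interface DistributionComponent {                   // requirement 4
    // decides on the most appropriate IR combination at any time
    Set<InteractionResource> distribute(MultimodalUIModel ui, ContextModel ctx);
}

interface Channel {                                 // requirement 5
    InteractionResource resource();                 // one channel per IR
    void render(String concreteInteractor);         // control IRs independently
    void onInput(Consumer<String> handler);
}
]]></preformat>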
      <p>The first two requirements are fulfilled by our multimodal
UI model with different presentations and input possibilities
as described in the next subsection. Afterwards we show
how this model is used to create multimodal UIs by
selecting the most appropriate combination of IRs
(requirements 3 to 5).</p>
      <p>
        MULTIMODAL EXECUTABLE UI MODEL
The distribution of UIs at runtime requires certain
information about the UI at runtime (part of requirements 1
and 2). To achieve this, our approach is based on the notion
of executable models that combine the static design
information, execution logic and runtime state information
of the UI [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This allows executing the models and observing their
status at runtime, as well as accessing design information.
Our set of metamodels for specifying distributable
multimodal UIs follows the Cameleon Reference
Framework [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]; we thus distinguish tasks and concepts,
abstract interface, concrete interface and final interface, as
also done in TERESA and UsiXML (see Figure 2). To
show how to develop a UI with our set of models, we
explain the short example presented in Figure 2. The
example is an excerpt from our cooking assistant and
models a recipe selection scenario. The presentation of the
recipe is possible via different modality combinations, and
the selection can be confirmed via several input styles.
      </p>
      <p>
        We first specify the workflow of the example with a task
model (we use an extended version of Concurrent Task
Trees for the definition [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]) and thus start with the
definition of the “ConfirmSelection” task (T1:
ConfirmRecipe). Afterwards, the abstract interaction(s) for
each task are specified by choosing between OutputOnly,
FreeInput, Choice and Command (similar to UsiXML and
TERESA) or ComplexInteractor to aggregate several
abstract interactions. So we choose one abstract interaction
object for the presentation of the selected recipe
(A1:OutputOnly) and one for the confirmation
(A2:Selection). The abstract interactors are connected via
mappings to the “ConfirmSelection” task (see Figure 2).
This is one major difference between transformational
approaches like UsiXML and the executable models
approach: because of the parallel execution of all models
(task, abstract and concrete), the information from all
models is available and does not need to be transformed
from one model into another. Each model only contains the
information of its abstraction level, and information from
different models is connected via mappings.
      </p>
      <p>
        In the next step the concrete interaction is specified
using the concrete input and concrete output models. The
separation of input and output is another difference to other
approaches, but it allows the independent addressing of
single IRs (requirement 2). However, it requires handling
IRs with combined input and output, like touchscreens. By
utilizing the CARE properties, developers can specify their
intentions on how to combine the different modalities
(requirement 1). Defining multimodal relations with the
CARE properties is similar to the ICARE software
components [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In contrast to ICARE, however, the
components and thus the multimodal relationships are not
statically related at design time but can be freely configured
between arbitrary modalities through their integration in the
interaction metamodel, and they are evaluated at runtime.
      </p>
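      <p>The following compact Java sketch illustrates, with hypothetical names, how the CARE properties can be embedded in an interactor metamodel so that the multimodal relationships remain inspectable at runtime:</p>
      <preformat><![CDATA[
// Compact sketch (hypothetical names) of an interactor metamodel carrying
// the CARE properties at runtime, in the spirit of the interaction metamodel.
import java.util.List;

enum Care { COMPLEMENTARITY, ASSIGNMENT, REDUNDANCY, EQUIVALENCE }

/** A concrete interactor is either a simple leaf bound to one modality or a
 *  complex node relating its children via a CARE property. */
sealed interface Interactor permits SimpleInteractor, ComplexInteractor {}

record SimpleInteractor(String name, String modality) implements Interactor {}

record ComplexInteractor(Care relation, List<Interactor> children)
        implements Interactor {}
]]></preformat>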
      <p>To present the recipe (A1:OutputOnly) the developer
chooses two different presentation possibilities: one for
graphical output (Picture and Text) and one with additional
natural language output (Audio). Each possibility is
aggregated by a complex interactor and the
Complementarity-attribute means that the children
complement each other and must be presented together.
Both possibilities are also aggregated by a complex
interactor with an Equivalence-attribute, marking both
possibilities as equivalent. The confirmation of the recipe
(A2:Selection) has only a graphical presentation (Button)
which is directly mapped to the abstract interactor.</p>
      <p>Besides the possibilities used here, the concrete output model
supports SignalOutput elements to include more limited
modalities like blinking lights or haptic feedback
(vibration), and DynamicOutput to create multiple outputs,
e.g. for a dynamically created number of elements.
To confirm the recipe, the developer provides three input
possibilities (Gesture, Speech and Pointing), which are
aggregated by a complex interactor with an
Equivalence attribute, defining that they all can be used to provide the
same input to the system (Figure 2). The next section
describes how the modeled description is used within a
runtime architecture to deliver flexible multimodal
interaction.</p>
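      <p>Before turning to the runtime architecture, the following sketch reconstructs the interactor trees of Figure 2 as illustrative Java code. It is a hypothetical reconstruction using the metamodel sketched above (types are repeated here so the example is self-contained), not the serialized model itself.</p>
      <preformat><![CDATA[
// Hypothetical reconstruction of the Figure 2 example as interactor trees.
import java.util.List;

enum Care { COMPLEMENTARITY, EQUIVALENCE, ASSIGNMENT }

record Node(String label, String modality, Care relation, List<Node> children) {
    static Node leaf(String label, String modality) {        // simple interactor
        return new Node(label, modality, Care.ASSIGNMENT, List.of());
    }
    static Node complex(Care relation, Node... kids) {       // complex interactor
        return new Node("complex", null, relation, List.of(kids));
    }
}

class RecipeExample {
    public static void main(String[] args) {
        // A1:OutputOnly -- two equivalent presentation possibilities
        Node graphical = Node.complex(Care.COMPLEMENTARITY,   // presented together
                Node.leaf("Picture", "graphical"), Node.leaf("Text", "graphical"));
        Node withAudio = Node.complex(Care.COMPLEMENTARITY,   // adds spoken output
                Node.leaf("Picture", "graphical"), Node.leaf("Text", "graphical"),
                Node.leaf("Audio", "voice"));
        Node presentation = Node.complex(Care.EQUIVALENCE, graphical, withAudio);

        // A2:Selection -- a directly mapped Button plus three equivalent inputs
        Node button = Node.leaf("Button", "graphical");
        Node confirm = Node.complex(Care.EQUIVALENCE,
                Node.leaf("Gesture", "gesture"),
                Node.leaf("Speech", "voice"),
                Node.leaf("Pointing", "pointing"));

        System.out.println(presentation);
        System.out.println(button + " / " + confirm);
    }
}
]]></preformat>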
      <p>RUNTIME DISTRIBUTION
Based on the needed components and the requirements that
need to be fulfilled to realize the anticipated dynamic
distribution at runtime, we developed and implemented the
runtime architecture depicted in Figure 3. The different
components and their behavior are described next.
Upon execution of the set of models, the central task
model calculates a set of active tasks. This triggers the
mappings that are connected with each task and results in
the activation of the mapped abstract interactors. The
mappings connected to the abstract interactors are in turn
triggered, and the result is a set of active complex CUI
elements in the concrete input and output models, which is
provided to the distribution component.</p>
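      <p>The following toy Java example illustrates this propagation principle: activating a task triggers its mappings, which transitively activate the mapped abstract and then concrete interactors. The simplified string-based representation is our own illustration, not the EMF implementation.</p>
      <preformat><![CDATA[
// Toy illustration of the mapping chain: task -> abstract -> concrete.
import java.util.*;

class MappingChain {
    // mapping source id -> targets activated when the source becomes active
    private final Map<String, List<String>> mappings = new HashMap<>();

    void map(String source, String... targets) {
        mappings.computeIfAbsent(source, k -> new ArrayList<>())
                .addAll(List.of(targets));
    }

    /** Collects the concrete interactors reached from an activated element. */
    void activate(String element, Set<String> activeConcrete) {
        List<String> targets = mappings.getOrDefault(element, List.of());
        if (targets.isEmpty()) {
            activeConcrete.add(element);   // end of the chain: concrete interactor
        } else {
            for (String t : targets) activate(t, activeConcrete);
        }
    }

    public static void main(String[] args) {
        MappingChain m = new MappingChain();
        m.map("T1:ConfirmRecipe", "A1:OutputOnly", "A2:Selection"); // task -> abstract
        m.map("A1:OutputOnly", "C:PresentationTree");               // abstract -> concrete
        m.map("A2:Selection", "C:ButtonAndInputTree");
        Set<String> active = new LinkedHashSet<>();
        m.activate("T1:ConfirmRecipe", active);  // the task model marks T1 as active
        System.out.println(active);  // [C:PresentationTree, C:ButtonAndInputTree]
    }
}
]]></preformat>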
      <p>
        The second information provider is a context model that
includes different observers to get information about the
available IRs (requirement 3). IRs are connected to our
runtime system via so-called channels [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. One channel is
responsible for establishing and maintaining the connection
to one IR (if needed via a network). This includes not only
the registration within the context model but also the
capability to receive information from and send information
to the IR. This concept allows the independent addressing
of IRs across platform borders (requirement 5).
      </p>
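      <p>A minimal sketch of such a channel interface could look as follows. The names are our assumptions; the AJAX and MRCP channels described next would be two implementations of this shape.</p>
      <preformat><![CDATA[
// Hedged sketch of the channel concept: one channel owns the connection to
// exactly one IR, registers it with the context model, and relays I/O.
import java.util.function.Consumer;

interface ContextRegistry {
    void register(String resourceId, String modality);  // announce the IR
    void unregister(String resourceId);                  // e.g. on connection loss
}

interface Channel {
    String resourceId();                       // the single IR this channel manages
    void connect(ContextRegistry registry);    // establish the (network) link, register
    void send(String concreteInteractor);      // render output on the IR
    void onReceive(Consumer<String> handler);  // deliver input events from the IR
    void disconnect();
}
]]></preformat>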
      <p>We have implemented different channels connecting
various interaction technologies, including browsers for
graphical output through an AJAX-based channel
implementation, and automatic speech recognition via
Dragon NaturallySpeaking as well as Text-to-Speech
engines through an implementation of the Media Resource
Control Protocol (MRCP). All models (user interface and
context) are implemented with the Eclipse Modeling
Framework (EMF). The direct mapping of EMF to Java
supports the bridging of the models and the distribution
component on the implementation level, which allows the
distribution component to observe the models for state changes.
The distribution component is notified whenever a new set
of concrete interactors is activated, and matches the
supported modalities to the available modalities of the
available IRs by adhering to the following goals:
Input: support as many (equivalent) input resources as
possible while considering the specified CARE relations
between the input elements. This aims at leaving the control
over the used IRs to the user by supporting a wide range of them.
Output: find the most suitable combination of output
resources while considering the specified CARE relations
between the output elements. Distributed output thus aims
at utilizing the most suitable combination of IRs to convey
the UI. The selection of IRs depends on their capabilities
and on context information like the resource location.
The algorithm first determines the IRs that can be utilized
by the user. To this end, the available IRs are queried from
the context model, together with information about the
premises as well as localization and direction information.
Based on the type of the IRs, the algorithm calculates
whether the resources are currently usable: displays, for
example, are considered usable when they are within the
vicinity of a user, and haptic input IRs when they are within
the user's reach. The resulting set of usable IRs
determines the usable modalities and thus the types of
concrete interactors that can be distributed.</p>
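      <p>The following illustrative Java fragment sketches such a usability check. The concrete distance thresholds are illustrative assumptions and would depend on the deployment; the types are ours.</p>
      <preformat><![CDATA[
// Illustrative usability check; the distance thresholds are assumptions.
import java.util.List;

record IR(String id, String type, double distanceToUserMeters) {}

class UsabilityFilter {
    static boolean usable(IR ir) {
        return switch (ir.type()) {
            case "display" -> ir.distanceToUserMeters() < 5.0; // user's vicinity
            case "haptic"  -> ir.distanceToUserMeters() < 0.8; // user's reach
            default        -> false;                           // unknown: not usable
        };
    }

    public static void main(String[] args) {
        List<IR> irs = List.of(
                new IR("wall-screen", "display", 2.5),
                new IR("keyboard", "haptic", 0.5),
                new IR("far-display", "display", 12.0));
        irs.stream().filter(UsabilityFilter::usable)
           .forEach(ir -> System.out.println(ir.id() + " is usable"));
    }
}
]]></preformat>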
      <p>
        In the next step the algorithm analyzes the CARE relations
of the active concrete interactors. The specified UI model
contains trees of complex interaction elements with simple
elements as leaf nodes. As only the leaf nodes have to be
distributed, the relations defined by their parent complex
interactors influence their distribution. The simple
interactors are automatically of type "assigned" and can
thus be directly distributed if a corresponding type of IR is
available. Interactors combined via complex elements of
type complementary or redundant must be distributed
together to reflect their meaning. This means that to make
an interaction defined as redundant available to the user,
all modalities addressed by the children of the complex
interactor have to be available. The equivalence relation is
used to specify different (combinations of) interactors that
transport the same information in case of output or allow
the user to provide the same information in case of input.
This makes the system more reliable and reduces ambiguity
and inconsistency. With respect to the distribution goals
specified above, a different handling of the equivalence
relation for input and output has been realized. For input, the
distribution of as many equivalent interactors as possible
results in more possibilities for the user to provide the
needed input. For output, a selection of the most feasible
interactors avoids confusion and unwanted redundancy.
Based on these interpretations of the CARE relationships
the algorithm first calculates the distribution of the output
interactors. The algorithm decides between the different
equivalent interactor combinations by selecting the one
supporting the most modalities. This is based on the
assumption that the designer utilizes the advantages of each
modality, so that more modalities result in a better
presentation. More sophisticated extensions that consider
additional context information are currently being evaluated.
Afterwards, the distribution of input interactors is
calculated. The algorithm distributes all elements
that are supported by the usable IRs to allow as many input
possibilities as possible. It is crucial that during the
distribution of the input interactors the algorithm pays
attention to coupled input and output, as e.g. in the case of a
touchscreen. The resulting distribution configuration
consists of tuples of concrete interactors and IR references.
Before sending the interactors to the channels, the
presentation of the output interactors is accomplished by a
layout algorithm [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] which takes into account the
spatial and temporal relationships of the interactors as well
as the workflow model, so as not to scatter related interactors.
In the case of the small example, the algorithm would distribute
the interactors as follows: for input, the algorithm tries to
support as many IRs as possible and thus determines at
maximum the gesture, voice input and pointing interactors,
supported by a keyboard, a microphone and a mouse,
respectively. For output, a screen is required, and an optional
loudspeaker would be integrated if available. The algorithm
would adapt the distribution accordingly when, e.g., the user
changes position and the distribution component
determines that some IRs are no longer available to the user
while others have just become available.
      </p>
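      <p>Summarizing the two selection rules, the following Java sketch (illustrative names and data, not the implemented algorithm) chooses the output combination covering the most modalities among the fully usable equivalent alternatives and distributes every usable equivalent input interactor:</p>
      <preformat><![CDATA[
// Sketch of the two selection rules under the stated distribution goals.
import java.util.*;

record Leaf(String name, String modality) {}

class Distributor {
    /** Output: among equivalent combinations, the fully usable one with the
     *  largest number of distinct modalities wins. */
    static Optional<List<Leaf>> chooseOutput(List<List<Leaf>> equivalentCombos,
                                             Set<String> usableModalities) {
        return equivalentCombos.stream()
                .filter(c -> c.stream()
                        .allMatch(l -> usableModalities.contains(l.modality())))
                .max(Comparator.comparingInt((List<Leaf> c) ->
                        (int) c.stream().map(Leaf::modality).distinct().count()));
    }

    /** Input: distribute all equivalent interactors whose modality is usable. */
    static List<Leaf> chooseInput(List<Leaf> equivalentInputs,
                                  Set<String> usableModalities) {
        return equivalentInputs.stream()
                .filter(l -> usableModalities.contains(l.modality()))
                .toList();
    }

    public static void main(String[] args) {
        Set<String> usable = Set.of("graphical", "voice", "pointing");
        var presentation = List.of(
                List.of(new Leaf("Picture", "graphical"), new Leaf("Text", "graphical")),
                List.of(new Leaf("Picture", "graphical"), new Leaf("Text", "graphical"),
                        new Leaf("Audio", "voice")));
        System.out.println(chooseOutput(presentation, usable)); // picks the audio variant
        var confirm = List.of(new Leaf("Gesture", "gesture"),
                new Leaf("Speech", "voice"), new Leaf("Pointing", "pointing"));
        System.out.println(chooseInput(confirm, usable));       // Speech and Pointing
    }
}
]]></preformat>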
      <p>CONCLUSION
We presented an approach for dynamically selecting the IRs
at runtime. Based on the modality relations modeled as
CARE properties, which are available at runtime due to the
utilization of executable models, and on information about
the current context, a distribution algorithm calculates the
most appropriate set of IRs.</p>
      <p>In the future we plan to develop a multimodal widget set to
ease the development of such multimodal and distributable
UIs. We also want to analyze further factors that influence
the distribution algorithm. Furthermore, automatic
calculation raises the problem of unsatisfactory results. To
overcome this issue for the distribution of UIs, we have
started developing a meta user interface that allows users to
configure the distribution according to their needs.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>N.</given-names>
            <surname>Barralon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Coutaz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Lachenal</surname>
          </string-name>
          .
          <article-title>Coupling interaction resources and technical support</article-title>
          .
          <source>In HCI International</source>
          <year>2007</year>
          , Volume
          <volume>4555</volume>
          <source>of LNCS</source>
          , pages
          <fpage>13</fpage>
          -
          <lpage>22</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>S.</given-names>
            <surname>Berti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Correani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Mori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Paternò</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          .
          <article-title>TERESA: A transformation-based environment for designing and developing multi-device interfaces</article-title>
          .
          <source>In CHI</source>
          <year>2004</year>
          ,
          volume II
          , pages
          <fpage>793</fpage>
          -
          <lpage>794</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>M.</given-names>
            <surname>Blumendorf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Feuerstack</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Albayrak</surname>
          </string-name>
          .
          <article-title>Multimodal user interaction in smart environments: Delivering distributed user interfaces</article-title>
          .
          <source>In Constructing Ambient Intelligence</source>
          ,
          <article-title>AmI 2007 Workshops Darmstadt</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>M.</given-names>
            <surname>Blumendorf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Feuerstack</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Albayrak</surname>
          </string-name>
          .
          <article-title>Executable models for human-computer interaction</article-title>
          .
          <source>In Proc. of the DSV-IS Workshop</source>
          <year>2008</year>
          , pages
          <fpage>238</fpage>
          -
          <lpage>251</lpage>
          , Berlin, Heidelberg,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>J.</given-names>
            <surname>Bouchet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Nigay</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Ganille</surname>
          </string-name>
          .
          <article-title>ICARE software components for rapidly developing multimodal interfaces</article-title>
          .
          <source>In Proc. of ICMI</source>
          <year>2004</year>
          , pages
          <fpage>251</fpage>
          -
          <lpage>258</lpage>
          , New York, USA,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>G.</given-names>
            <surname>Calvary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Coutaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Thevenin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Limbourg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bouillon</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanderdonckt</surname>
          </string-name>
          .
          <article-title>A unifying reference framework for multi-target user interfaces</article-title>
          .
          <source>In Interacting with Computers</source>
          ,
          <volume>15</volume>
          (
          <issue>3</issue>
          ):
          <fpage>289</fpage>
          -
          <lpage>308</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>J.</given-names>
            <surname>Coutaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Nigay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Salber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Blandford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>May</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Young</surname>
          </string-name>
          .
          <article-title>Four easy pieces for assessing the usability of multimodal interaction: The care properties</article-title>
          .
          <source>In INTERACT 1995</source>
          , pages
          <fpage>115</fpage>
          -
          <lpage>120</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>A.</given-names>
            <surname>Demeure</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-S.</given-names>
            <surname>Sottet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Calvary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Coutaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ganneau</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanderdonkt</surname>
          </string-name>
          .
          <article-title>The 4c reference model for distributed user interfaces</article-title>
          .
          <source>In ICAS 2008</source>
          . IEEE Computer Society Press.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>C.</given-names>
            <surname>Elting</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Hellenschmidt</surname>
          </string-name>
          .
          <article-title>Strategies for self-organization and multimodal output coordination in distributed device environments</article-title>
          .
          <source>In Workshop on Artificial Intelligence in Mobile Systems</source>
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>S.</given-names>
            <surname>Feuerstack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blumendorf</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Albayrak</surname>
          </string-name>
          .
          <article-title>Prototyping of multimodal interactions for smart environments based on task models</article-title>
          .
          <source>In Constructing Ambient Intelligence</source>
          , AmI 2007 Workshops.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>Q.</given-names>
            <surname>Limbourg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanderdonckt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Michotte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bouillon</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>López-Jaquero</surname>
          </string-name>
          .
          <article-title>Usixml: A language supporting multi-path development of user interfaces</article-title>
          .
          <source>In EHCI/DSVIS</source>
          , Volume
          <volume>3425</volume>
          <source>of LNCS</source>
          , pages
          <fpage>200</fpage>
          -
          <lpage>220</lpage>
          .
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>J.P.</given-names>
            <surname>Molina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanderdonckt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fernández-Caballero</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.D.</given-names>
            <surname>Lozano</surname>
          </string-name>
          .
          <article-title>Rapid prototyping of distributed user interfaces</article-title>
          .
          <source>In Proc. of CADUI'2006</source>
          , pages
          <fpage>151</fpage>
          -
          <lpage>166</lpage>
          . Springer-Verlag,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>