<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>A Context-Aware Proactive Controller for Smart Environments</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Frank Krüger</string-name>
          <email>frank.krueger2@uni-rostock.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gernot Ruscher</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sebastian Bader</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thomas Kirste</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universität Rostock</institution>
          ,
          <addr-line>Albert-Einstein-Str. 21, 18059 Rostock</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2011</year>
      </pub-date>
      <volume>13</volume>
      <issue>2011</issue>
      <abstract>
        <p>In this paper we describe an implicit user interface for smart environment control: we make our system guess how to assist the user(s) proactively. Our controller is based on two formal descriptions: one that describes user activities, and another that specifies the devices in the environment. Putting both together, we can synthesize a probabilistic model whose states represent activities performed by the user(s) and are annotated with sequences of device actions, which are executed once the corresponding activities have been recognized. The resulting system is purely reactive and can be executed in real time.</p>
      </abstract>
      <kwd-group>
        <kwd>intention recognition</kwd>
        <kwd>HMM</kwd>
        <kwd>planning</kwd>
        <kwd>smart environments</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
        As computers become smaller and smaller, the vision of ubiquitous
computing is becoming reality. At the same time, smart environments
contain a large number of devices and thus become more and more
complex, so that their configuration as well as their correct usage gets more
time-consuming and error-prone. Exploring new ways to control
these invisible devices is a challenge addressed by current research
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Our approach is to create an entirely reactive system that controls
all devices of the environment by inferring the intentions of the
user. The system gives support by controlling the devices the way
the user would do to achieve his goals. We use semantic models
of the user and the environment to ensure that the support is sound
and complete, in the sense that the environment is able to support
the user correctly in every recognizable situation.
To proactively support users in instrumented environments, we need
to infer their intentions, the goals behind their current activities.
Here we use a rather technical notion of intention: we are given
descriptions of complex actions, like giving a presentation or preparing
a meal. If we detect the user performing some sub-tasks of such a
complex action, we assume that his goal is to perform the complex
action completely. A controller as described requires all
calculations to be executed in real time. It is therefore necessary to move
time-consuming operations, like planning processes, from runtime to
compile time. Thus, we can create a purely reactive controller with
time-bounded complexity, able to control the environment in every
possible situation.
      </p>
      <p>As an illustrating example we use throughout this paper the task of giving a
presentation inside our smart meeting room. This environment is
introduced below. The graphical representation of this task is given
in Figure 1. Here the task of giving a presentation decomposes into a
sequence of sub-tasks. The user starts the presentation by
entering the room and moving to the front of the room. When the
presentation is finished, the user moves to the door to leave the room.
A more detailed description of this example is given in section 3.</p>
    </sec>
    <sec id="sec-2">
      <title>2. PRELIMINARIES</title>
      <p>The controller described below is based on semantic models of the
user and his environment. For our system we currently employ
formal action descriptions and task models, which are compiled into a
probabilistic model. All necessary concepts are briefly introduced
below.</p>
      <p>
        Hidden Markov Models (HMMs) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] are probabilistic models that
allow one to infer the state of a system that is not observable directly,
but only through noisy or ambiguous sensor data. An HMM
consists of a finite number of states, each
associated with a probability distribution over sensor
observations, which allows one to conclude the system state given sensor data. To
describe the temporal behavior of a system, an HMM specifies
probabilities for state transitions. HMMs are a state-of-the-art method for
activity recognition.
      </p>
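      <p>The filtering computation an HMM performs can be sketched in a few lines. The following is a minimal illustration only: the two activities, the transition matrix, and the sensor model are invented for the example and are not part of the system described here.</p>

```python
# Minimal HMM forward-filtering sketch. All states and probabilities are
# invented for illustration; they are not taken from the system described here.
states = ["MoveFront", "Present"]
prior = [0.5, 0.5]
# transition[i][j]: probability of switching from state i to state j
transition = [[0.6, 0.4],
              [0.1, 0.9]]
# emission[i][o]: probability of sensor reading o (0 or 1) in state i
emission = [[0.8, 0.2],
            [0.3, 0.7]]

def filter_step(belief, observation):
    """One filtering step: predict via the transitions, weight by the observation."""
    n = len(states)
    predicted = [sum(belief[i] * transition[i][j] for i in range(n)) for j in range(n)]
    weighted = [predicted[j] * emission[j][observation] for j in range(n)]
    norm = sum(weighted)
    return [w / norm for w in weighted]

belief = prior
for obs in [1, 1, 1]:   # a run of "presentation-like" sensor readings
    belief = filter_step(belief, obs)
print(states[belief.index(max(belief))])   # prints "Present"
```

      <p>Each filtering step costs time quadratic in the number of states, which is one reason why the size of the generated state space matters for the comparison in section 4.4.</p>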
      <p>
        Depending on the available sensors, we can detect the current
activity of the users. For example, an indoor positioning system can be used
to detect whether a user is entering the room and heading for the
presentation stage. As customary in activity and intention
recognition, we use probabilistic models. Such models can cope with
noisy and contradictory sensor data and nonetheless allow us to infer
the most likely sequence of actions or the overall intention. Here,
we use Dynamic Bayesian Networks [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], such as HMMs, for probabilistic modeling.
      </p>
      <p>[Figure 1: Graphical representation of the task model: 1. Give Presentation decomposes into the sequence 2. Enter Room » 4. Move Front » 5. Present » 7. Move Door » 8. Leave Room.]</p>
      <p>
        Calculating a probability distribution over the
current state with respect to the observed sensor data as well as the
previous state is known as filtering. Doing this requires a model
that describes both the behavior of the user and the observable
sensor data. In addition to recognizing the current activity, these methods
allow us to predict future activities, and in this case the intentions, of the user.
Complex behaviors of (groups of) users can formally be described
using CTTE [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] or CTML models [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which basically are
hierarchical descriptions of tasks. Sub-tasks can be set into a temporal
relation to each other. CTML provides temporal operators such as the
sequence operator (»), the order independence operator (|=|), the
concurrency operator (|||), and others that are not used in the
examples in this paper. The Collaborative Task Modeling Language
(CTML) is especially designed for smart environments and offers
features for team modeling, location modeling, device modeling,
and domain modeling. As described in section 4, we can transfer
such a description into a probabilistic model allowing us to recognize
the current complex action, and thus to infer the overall
intention of a sequence of actions.
      </p>
      <p>
        In the Planning Domain Definition Language (PDDL) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], device
actions are formalized as 4-tuples ⟨Name, Parameters,
Preconditions, Effects⟩. Based on such a formal description we can use
standard AI planning techniques to infer a plan (a sequence of
actions) leading from the current to the desired state of the world.
Figures 2 and 3 show examples of PDDL descriptions.
      </p>
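      <p>A PDDL action tuple of this kind can be mirrored directly in code. The sketch below is a hypothetical STRIPS-style encoding (the class name, the string-based propositions, and the helper functions are our own illustration, not the paper's implementation): a world state is a set of true propositions, and an action lists the propositions it requires and the ones it makes true.</p>

```python
from dataclasses import dataclass

# Hypothetical STRIPS-style encoding of the 4-tuple (Name, Parameters,
# Preconditions, Effects); a world state is a frozenset of true propositions.
@dataclass(frozen=True)
class Action:
    name: str
    parameters: tuple
    preconditions: frozenset   # "not p" means proposition p must be false
    effects: frozenset         # propositions made true by the action

def applicable(action, state):
    """Check the preconditions against the current world state."""
    for p in action.preconditions:
        if p.startswith("not "):
            if p[4:] in state:
                return False
        elif p not in state:
            return False
    return True

def apply_action(action, state):
    """Return the successor world state (the effects become true)."""
    return state | action.effects

canvasdown = Action("canvasdown", ("canvas1",),
                    frozenset({"not isdown canvas1"}),
                    frozenset({"isdown canvas1"}))

s0 = frozenset()                      # canvas up, projector off
s1 = apply_action(canvasdown, s0)
print(applicable(canvasdown, s0), "isdown canvas1" in s1)   # prints "True True"
```

      <p>With such an encoding, a planner only has to search over applicable actions, which is what the compile-time planning step in section 4 relies on.</p>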
    </sec>
    <sec id="sec-3">
      <title>3. AN APPLICATION EXAMPLE</title>
      <p>
        The environment where most of our experiments take place is the
so-called Smart Appliance Lab. This room is instrumented with
various sensors, such as the location tracking system Ubisense [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
Thus, the location of the different users is provided by the environment.
Other parts of our experimental environment are actuators, such as
projectors and canvases. Here, both sensors and actuators are called
devices. Software counterparts of all these devices are provided by
the middleware implemented for this environment. These software
devices enable us to obtain the status of each device inside the room
and to create a world state. The world state of our environment is thus
comprised of the sensor observations and the device states. The
environment as well as the middleware controlling the devices of
the environment are described in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <sec id="sec-3-2">
        <title>Giving a presentation</title>
        <p>[Table 1: The possible world states, given by the binary states of Projector1 (on = T, off = F) and Canvas1 (down = T, up = F): TT, TF, FT, FF.]</p>
        <p>
Since the experimental environment can be used as a smart meeting
room, a typical application is giving a presentation. In this scenario
the user first enters the room. For our example we assume that the
room contains one projector and one canvas. After the user has moved
to the front of the room, where the canvas is located, he prepares the
environment for his presentation. To do so, he has to plug in the
notebook, set up the projector, and lower the canvas. After this is
done, the user gives his talk and finishes it by moving to the door.
Finally, the user leaves the room. This example is kept simple to
illustrate the main points. The real environment comprises
eight projectors and eight canvases.
        </p>
        <p>The task specification in Figure 1 contains a detailed description of
this example. The annotated effects (illustrated as clouds) describe
the desired state of the environment for the following sub-tasks. As
description language for task models we use CTML, as described in
section 2. The graphical representation of the task model omits the
description of the observation data and the priority function. The
world state in this example is given in Table 1 and only consists
of the two devices. Each of them has a binary state: the projector
is either turned on or turned off, and the canvas can be up
or down.</p>
        <p>Our goal is now to build a controller that recognizes the current
state of the user and executes corresponding device actions that
make the annotated effects come true. By executing these action
sequences the system automatically assists the user in achieving
his goals.</p>
        <p>Figures 2 and 3: PDDL descriptions of the two device actions.</p>
        <preformat>(:action canvasdown
  :parameters (?c - canvas)
  :precondition (not (isdown ?c))
  :effect (isdown ?c))

(:action projectoron
  :parameters (?p - projector)
  :precondition (not (ison ?p))
  :effect (ison ?p))</preformat>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. A CONTEXT-AWARE PROACTIVE CONTROLLER</title>
      <p>This section explains how to combine formal descriptions of the
environment and of the user behavior into a purely reactive probabilistic
model. First, the description of the user is compiled into a
probabilistic model that allows us to recognize the user's intentions. Then we
enrich this model by annotating the states with actions executable
by the environment. While running the system, and based on the
current state of the environment, one of the states will be the most likely.
The actions attached to that state are then simply executed,
resulting in a system that supports the user in achieving his high-level
goals.</p>
      <p>In the following sections we discuss two different possibilities of
enriching the model with device actions. The first is to generate
a distinct HMM state for each possible world state. The second is to
annotate the corresponding state with sequences of device actions
for all possible world states. The plan for the current world state is
then accessible by using the world state as the key into a lookup table
that yields the corresponding plan.</p>
    </sec>
    <sec id="sec-6">
      <title>4.1 From Symbolic to Probabilistic Models</title>
      <p>
We start with the annotated task model from Figure 1, which
consists of a task model and effects annotated to sub-tasks. Each effect
is a subset of a world state, where the set of possible world states is the
Cartesian product of all device states. We apply the transformation given in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], which
parses the syntax tree of the annotated task model and
applies the inference rules for each of the temporal operators. The
states of the resulting annotated HMM correspond to tasks of the
task model with corresponding effects. The whole model captures
all possible (with respect to the task model) sequences to complete
the root task.
      </p>
      <p>We extend the original task model by annotating tasks with their
effects with respect to the world state. The effect of an annotated
task should hold after the task has been executed; this can be achieved by
either the user or the controller. Figure 1 contains the additional
effect specifications for the Move Front and the Move Door
sub-tasks. In our scenario the effect of the state Move Front is that the
environment is prepared for the presentation, namely the canvas is
down and the projector is on.</p>
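      <p>Such effect annotations can be represented very simply. The sketch below uses a plain mapping from task names to desired device states; the dictionary representation and the names are our own illustration, not the CTML notation.</p>

```python
# Illustrative effect annotations for the two annotated sub-tasks; the keys
# and the dictionary encoding are our own, not the CTML syntax.
effects = {
    "MoveFront": {"projector_on": True,  "canvas_down": True},
    "MoveDoor":  {"projector_on": False, "canvas_down": False},
}

def satisfied(task, world):
    """True if the world state already fulfils the task's annotated effect."""
    return all(world[device] == value for device, value in effects[task].items())

world = {"projector_on": True, "canvas_down": True}
print(satisfied("MoveFront", world), satisfied("MoveDoor", world))   # prints "True False"
```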
      <p>To ensure that the effects of an annotated HMM state become true, the
controller has to execute device actions depending on the current
world state. The canvas has to be lowered if it is up, but if the
projector is already turned on we can omit turning on the
projector. Therefore we have to generate sequences of device actions for
each possible situation. This is done by taking each world state as
the start situation for a planner and the desired subset of the world state,
described by the effects of the annotated HMM state, as the goal. To
realize this, the planner takes the device action specifications shown in
Figures 2 and 3 as input. The result of this planning step is, for each
possible world state, a sequence of device actions that has to be executed
to create the effects specified in the original annotated task model
in Figure 1.</p>
      <p>The next two sections describe how to use these pairs of world states
and device action sequences to generate the controller. Both approaches
follow the workflow given in Figure 5.</p>
    </sec>
    <sec id="sec-7">
      <title>4.2 Unfolding HMM states</title>
      <p>In order to distinguish the different world states at the level of
HMM states, it is necessary to unfold the annotated HMM states
using the different world states. Therefore we replace each
annotated HMM state by extended HMM states that are generated from
the state itself combined with each possible world state. In our example, Move
Front is replaced by one state for each element of the Cartesian product of
the device states, to ensure that each observation of a world state
corresponds to one HMM state. Here Move Front is replaced by four
new HMM states, each representing a possible world state. Only
the HMM state that covers the complete effects has a transition to
the HMM state generated from the following sub-task. In our
example only the Move Front TT state, which assumes that the projector
is on and the canvas is down, has this transition.</p>
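      <p>The unfolding step itself amounts to enumerating the Cartesian product of the device states. A small sketch, using the TT/TF/FT/FF naming of Table 1; the data structures are our own illustration.</p>

```python
from itertools import product

# Unfold the annotated HMM state "Move Front" into one extended state per
# possible world state (T = projector on / canvas down, as in Table 1).
devices = ["projector1", "canvas1"]
effect = (True, True)      # annotated effect: projector on and canvas down

slice_states = []
for world in product([True, False], repeat=len(devices)):
    name = "MoveFront_" + "".join("T" if v else "F" for v in world)
    # only the state whose world state already satisfies the effect gets the
    # outgoing inter-slice transition to the next sub-task
    slice_states.append((name, world == effect))

print(slice_states)   # MoveFront_TT is the only state with an exit transition
```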
      <p>This allows us to attach, to each state in the probabilistic model,
the plans that need to be executed in that state, as follows: every annotated
user state is combined with every world state, that is, with every combination
of the states of all devices. This world state is used as the precondition
for the device actions during the compilation process.</p>
      <p>The generated HMM contains so-called slices, each consisting of all
states generated from one sub-task of the CTML specification.
Each slice is created from the Cartesian product of the device
states. The intra-slice states differ from each other only by the
possible observations of the world state. The names of the intra-slice
states illustrated in Figure 4 contain the true/false values given in
Table 1. Every state of a slice has an incoming intra-slice transition
with a probability given by the number of states.</p>
      <p>Only the states
that create the effects given in the task model description have an
outgoing inter-slice transition with very high probability. It is possible
that more than one state covers the same effect, due
to effects that leave single devices of the world unspecified. The probabilities
of the intra-slice transitions are simply given by the number of target
states. Inter-slice transitions are generated with respect to the
temporal operators of the sub-tasks. The transition probabilities are
given by the normalized weights of the single sub-tasks.</p>
    </sec>
    <sec id="sec-8">
      <title>4.3 Lookup table</title>
      <p>Another approach to enriching the HMM with device action sequences
for user assistance is to create a lookup table of the necessary device
actions and attach it to the corresponding annotated HMM state.
As in the first approach, the device action sequences depend on the
current world state and need to be generated by a planner that uses
the specified effects as goals. Attaching these pairs of world states
and device action sequences to the annotated HMM state provides a lookup
table at runtime. Given the world state, consisting of all device
states, the table provides the pre-generated device action sequence
that has to be executed in order to make the specified effects on the
environment come true.</p>
      <p>In our scenario the states Move Front and Move Door are extended
by lookup tables for user assistance. The table annotated to the
Move Front state contains device action sequences that ensure that
the projector is turned on and the canvas is down. The Move Door
HMM state is annotated with a table of device action sequences
that ensure that given any world state the projector is turned off
and the canvas is up. Figure 6 contains a graphical representation
of the generated extended HMM. Probability distribution functions
as well as state transition probabilities are omitted for reasons of
clarity.</p>
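      <p>At runtime, the second approach reduces to a constant-time dictionary lookup. A sketch with hand-written tables matching the scenario above; the concrete action orderings are one valid choice for illustration, not prescribed by the paper.</p>

```python
# Lookup-table sketch: each annotated HMM state carries a pre-generated table
# from world state (projector_on, canvas_down) to a device action sequence.
move_front_table = {
    (True,  True):  [],
    (True,  False): ["canvasdown"],
    (False, True):  ["projectoron"],
    (False, False): ["projectoron", "canvasdown"],
}
move_door_table = {
    (True,  True):  ["projectoroff", "canvasup"],
    (True,  False): ["projectoroff"],
    (False, True):  ["canvasup"],
    (False, False): [],
}
tables = {"MoveFront": move_front_table, "MoveDoor": move_door_table}

def actions_for(hmm_state, world_state):
    """Constant-time lookup replacing runtime planning."""
    return tables[hmm_state][world_state]

print(actions_for("MoveDoor", (True, True)))   # prints "['projectoroff', 'canvasup']"
```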
    </sec>
    <sec id="sec-9">
      <title>4.4 Choosing one method</title>
      <p>The previous sections described two approaches to attach pre-generated
plans to the states of a probabilistic model, namely an HMM. Both
approaches generate real-time capable systems that do not have to
solve planning problems, such as finding a sequence of device
actions to support the user, at runtime. The first approach creates an HMM with
very many states, because it creates a state for every possible world
state. Here the distinction of the world states is done at the state
level. The idea of unfolding HMM states by using every possible
world state is appropriate if the state of a device is not reliable or
its observation is noisy.</p>
      <p>[Figure 6: The generated extended HMM with the states Initial, Enter Room, Move Front, and Leave Room; the annotated states carry lookup tables mapping the world states TT, TF, FF, FT to device action sequences such as ProjectorOn (projector1), CanvasDown (canvas1) and ProjectorOff (projector1), CanvasUp (canvas1).]</p>
      <p>The second approach moves the distinction of the world states
from different HMM states with attached plans to one HMM state
that contains a table of multiple plans, one for each possible world
state. The number of HMM states is independent of the world
state, which avoids a very high number of states. This approach
is applicable whenever the world state is known with certainty, i.e.,
whenever each device state is observable without any noise or
inconsistency.</p>
    </sec>
    <sec id="sec-10">
      <title>5. THE EXECUTION ENVIRONMENT</title>
      <p>We developed an execution framework for Bayesian inference that
is able to perform fast online filtering with HMMs and particle
filters. By separating the model description from the implementation
of the algorithms we designed a highly reusable framework that
enables users to embed parameterized filters into
different environments. This enables us to integrate the generated
controller into the software structure described above. Users of this
framework only need to implement problem-specific details.
A probabilistic model for filtering in our framework has to be
implemented in C++. One has to describe three different parts: first, a
specification of the state space, which in the case of an HMM is
represented by a set of states; second, the transition probabilities from
each state to every other, represented as a matrix of probabilities; and third,
a probability distribution of sensor observations for each state. To
provide a more intuitive tool for describing HMMs we introduced
a description language that supports a simple way of describing HMM-based
models.</p>
      <p>
        The compilation process creates the state space of our controller
from the single sub-tasks of the user model combined with each
possible combination of device states. The transition probabilities are
given by the probabilities of the model generated from the
sub-tasks as described in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and the observation probability
distributions of the original approach are extended by an observation of the
device states in the extended state. A model specified in this way
takes sensor data and the world state as input and provides a
sequence of device actions as output. These device actions need to be
executed in order to support the user.
      </p>
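      <p>Putting the pieces together, one step of the reactive control loop looks roughly as follows. The helper names and the callback-based device interface are assumptions for illustration; the actual framework is implemented in C++.</p>

```python
# One step of the reactive control loop: pick the most likely user state from
# the filtered belief, then execute the device actions attached to it for the
# observed world state. All names here are illustrative.
def most_likely(belief, states):
    return states[max(range(len(states)), key=lambda i: belief[i])]

def control_step(belief, states, attached_plans, world_state, execute):
    state = most_likely(belief, states)
    for action in attached_plans.get(state, {}).get(world_state, []):
        execute(action)   # hand the action to the device middleware
    return state

executed = []
plans = {"MoveFront": {(False, False): ["projectoron", "canvasdown"]}}
state = control_step([0.1, 0.9], ["EnterRoom", "MoveFront"],
                     plans, (False, False), executed.append)
print(state, executed)   # prints "MoveFront ['projectoron', 'canvasdown']"
```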
    </sec>
    <sec id="sec-11">
      <title>6. SUMMARY AND OPEN PROBLEMS</title>
      <p>In this paper we showed that a context-aware controller for smart
environments can be created from a combination of semantic
models of the user and the environment. The controller is based on
Bayesian inference, where the model is generated from a task-based
specification of the users together with precondition and effect
specifications of each device forming the environment. Sensor data as
well as the accumulated world state serve as input; a device action
sequence that needs to be executed is the output of the inference
process. We introduced two ideas for merging task-based user models and
the precondition and effect specifications of the environment to create
probabilistic models that assist the user.</p>
      <p>Further research should include an evaluation of the controller
described here in the smart environment. Both approaches should be evaluated
and the results compared for different devices and
scenarios. This includes tests for the maximum manageable complexity
of the state space as well as the minimal complexity that creates
sufficient user support. Due to the compile-time planning process we
are able to pre-generate action sequences. This allows us to find
modeling problems, such as deadlocks, at compile time. Our approach in
this paper utilizes HMMs for inference. However, since the state
space may explode and exact inference may then not be feasible, we can
change the inference algorithm to Monte Carlo based methods such
as particle filters. These methods are already supported by the
execution environment described above.</p>
      <p>The controller introduced here is described as a central service. It
is possible to decentralize this approach by using multiple
services, each of them describing a subspace of the model. By
comparing the likelihoods of the multiple services, it is possible to
choose either the action sequence of one agent or a combination of
multiple sequences that do not disturb each other.</p>
      <p>Another point that should be analyzed is how both systems
behave if only the most probable device action sequences are
pre-planned. If the system reaches a state that does not contain a device
action sequence, the plan has to be created at runtime. The real-time
behavior of this extension has to be examined.</p>
    </sec>
    <sec id="sec-12">
      <title>Acknowledgements</title>
      <p>Frank Krüger's work in the MAXIMA project as well as Gernot
Ruscher's work in the MAIKE project are supported by the
Wirtschaftsministerium M-V with funds from EFRE and ESF.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] Ubisense. http://www.ubisense.de, June
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <source>UbiComp '10: Proceedings of the 12th ACM International Conference on Ubiquitous Computing</source>
          , New York, NY, USA,
          <year>2010</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ruscher</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Kirste</surname>
          </string-name>
          .
          <article-title>Decoupling smart environments</article-title>
          . In S. Bader, T. Kirste, W. G. Griswold, and A. Martens, editors,
          <source>Proceedings of PerEd 2010</source>
          , Copenhagen, September
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C.</given-names>
            <surname>Burghardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wurdel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bader</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ruscher</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Kirste</surname>
          </string-name>
          .
          <article-title>Synthesising generative probabilistic models for high-level activity recognition</article-title>
          . In
          <source>Activity Recognition in Pervasive Intelligent Environments</source>
          . Atlantis Press, Paris, France,
          <year>2010</year>
          . To appear.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Mori</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Paterno</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Santoro</surname>
          </string-name>
          .
          <article-title>CTTE: Support for developing and analyzing task models for interactive system design</article-title>
          .
          <source>IEEE Transactions on Software Engineering</source>
          ,
          <volume>28</volume>
          :
          <fpage>797</fpage>
          -
          <lpage>813</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Murphy</surname>
          </string-name>
          .
          <article-title>Dynamic Bayesian Networks: Representation, Inference and Learning</article-title>
          .
          <source>PhD thesis</source>
          , University of California, Berkeley, CA, USA,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L. R.</given-names>
            <surname>Rabiner</surname>
          </string-name>
          .
          <article-title>A tutorial on hidden Markov models and selected applications in speech recognition</article-title>
          .
          <source>Proceedings of the IEEE</source>
          , pages
          <fpage>257</fpage>
          -
          <lpage>286</lpage>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Russell</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Norvig</surname>
          </string-name>
          .
          <source>Artificial Intelligence: A Modern Approach</source>
          . Prentice Hall, 3rd edition,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wurdel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sinnig</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Forbrig</surname>
          </string-name>
          .
          <article-title>CTML: Domain and Task Modeling for Collaborative Environments</article-title>
          .
          <source>J. UCS</source>
          ,
          <volume>14</volume>
          (
          <issue>19</issue>
          ):
          <fpage>3188</fpage>
          -
          <lpage>3201</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>