<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Quality of Context: Handling Context Dependencies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tobias Zimmer</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Telecooperation Office (TecO), Universität Karlsruhe</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Context dependencies are one of the major problems in future ubiquitous computing environments. In this paper we introduce Genetic Relation of Contexts (GRC), a lightweight distributed algorithm to analyze interdependencies of context data in an efficient decentralized manner. We present first results of the evaluation of the system, indicating a relevant increase in context quality through the use of GRC, while at the same time the energy consumption of computation tasks can be reduced. In our experiments GRC was able to reduce network traffic by up to 60% by filtering low-quality contexts.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
      <p>
        The next step towards the original vision of Ubiquitous
Computing [1] will be to use the available technologies to
build personalized context-aware applications and bring them
together to form large-scale multi-user ubiquitous
computing environments. This new class of environments will
consist of a heterogeneous set of applications and artifacts –
everyday objects augmented with ubiquitous computing
technology – from different sources. The environments will be
highly dynamic in terms of interacting applications, as
mobile users will enter and leave the scope of an environment
continuously, taking some of their personalized applications
and services with them. Processing context in these
environments needs interoperability of applications from
different sources to allow for sufficient diversity in services to
attract users. Additionally interaction in context has to be
manageable on a common basis to handle the complexity of
highly dynamic large-scale ubiquitous computing
environments that are hosting many heterogeneous applications.
Today most context-aware systems have comparably flat
architectures: Sensor values are processed into contexts and
these contexts are used to adapt the behavior of appliances.
Only very rarely are settings encountered that feature
multi-stage context processing – settings where context information
is communicated and then fused or aggregated to derive
new contexts. This will most probably change soon. In
large environments personalized context-aware applications
will need to propagate the context information they can
access and derive to provide the user with benefits from all
available services cooperating. High level, abstract context
information will become more important to provide
sophisticated context-aware services [
        <xref ref-type="bibr" rid="ref1">2, 3</xref>
        ].
      </p>
      <p>
        Scaling effects are a subject of research in networking and
sensor networks. Focusing on hardware and protocol issues,
current research in ubiquitous computing rarely
investigates scaling effects in context processing. Most published
context processing architectures (context models) are not
capable of handling the degradation in context quality that is
caused by multi-step aggregation processes – most
provide no tools for handling context reliability at all.
In large-scale ubiquitous computing environments a new class
of problems based on the highly dynamic and modular
architecture is encountered. Determining ”processing trees” –
graphs that reflect the path and evolution of context
information – will no longer be possible due to the high
complexity. So new ways of handling problems of
context dependencies, like cyclic usage of context and splitting
and multiplication of context, have to be found. System
engineers will also have to cope with problems that have
already been identified to be of relevance for advanced context
processing like locality of context and ageing of context [
        <xref ref-type="bibr" rid="ref2 ref3">4,
5</xref>
        ].
      </p>
      <p>This new class of scaling-induced problems in context
processing needs to be handled efficiently. Otherwise a large
number of interdependent contexts of low quality will be
communicated and processed, consuming energy and
bandwidth in ubiquitous computing environments, and leading to
unacceptably low context recognition rates of applications.
In this paper we present a lightweight context management
algorithm that provides a solution to the above mentioned
problems of cyclic usage of context and splitting and
multiplication of context. It filters interdependent contexts of low
quality without the need to extract and analyze context
processing trees of ubiquitous computing environments. The
algorithm is self-contained and does not need access to any
additional semantic context information. So it can be applied
independently from the used semantic context model,
allowing for a seamless integration into existing context-aware
applications and systems.</p>
      <p>
        First we take a closer look at the problem scenario. Then
Genetic Relation of Contexts (GRC) is introduced. GRC is
an algorithm designed to handle typical problems of context
dependency in highly dynamic large-scale ubiquitous
computing environments. We analyze the function of GRC and
finally present first results of the application of GRC in the
AwareOffice [
        <xref ref-type="bibr" rid="ref4">6</xref>
        ] – our testbed for deploying context-aware
applications in the real world – and simulative results
giving an impression of the performance of the system when
applied in larger scale settings.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. INTERDEPENDENT CONTEXTS</title>
      <p>
        The analysis of large application scenarios [
        <xref ref-type="bibr" rid="ref5">7</xref>
        ] and
simulations of large-scale ubiquitous computing settings have
revealed a class of problems connected to dependencies of
context data. Two of the main representatives of this class of
problems that were identified are called ”splitting and
multiplication” and ”cyclic usage”, as stated before.
      </p>
      <p>
        Splitting and multiplication can happen whenever a context
is consumed by many artifacts of a similar kind (see
Figure 1). In this case context a is used in parallel by many
appliances of type A that all produce contexts of type b.
The newly produced contexts are all derived from the same
source data – context a. If an application of type B seeks
to consume this multiply available context type b, it may be
necessary to know whether these pieces of context
information are independent or not – e.g. if data fusion algorithms
like Kalman filters should be used on the input contexts.
A well known example scenario where splitting and
multiplication becomes relevant resides in the interactive office
domain: To adapt to a meeting situation, personal devices like
PDAs and mobile phones are equipped with a
context-aware application that can adjust the alarm function of the
device according to the current context. If the personal
devices propagate the meeting context they detect and use to
adapt themselves, to provide other appliances with this
abstract information, the meeting context becomes multiplied
by the number of artifacts providing it. Other applications
like interactive doorplates [
        <xref ref-type="bibr" rid="ref4">6</xref>
        ] that use the meeting context
to adapt themselves, have to decide whether the multiply
available meeting contexts are independent – in that case it
can be interpreted as an amplification of the context
information, increasing its reliability – or whether all the meeting
contexts were derived on the basis of the same source
information. In this case the aggregation of the available meeting
contexts would not increase the reliability of this context.
Cyclic usage means that context data is processed in a way
that an artifact is consumer of a context that is derived –
in one or more steps – from another context that it
produced itself. Cyclic usage of context may not be a problem
if the number of intermediate steps is high or the cycle is
established intentionally. In general it is not possible to rule
out that cyclic usage will yield problems in the applications.
That is especially true if cycles are present that conserve
contexts over a long time without contributing new contextual
information.
Cycles can easily form in environments when applications
are present that can derive a target context from different
sources. A simple scenario leading to possible cycles is an
extension of the meeting scenario introduced above: If the
personal devices – e.g. PDAs and mobile phones – are set
up to collaborate for saving energy, they can seek to reduce
the needed power for processing by not deriving the
meeting context by themselves, but trying to ”reuse” the
meeting context provided by another artifact. In this case the
personal devices can either derive the meeting context from
other context types available in the environment or consume
a meeting context directly. Still the device would produce
a meeting context itself, leading to a situation in which even
two devices can establish a permanent meeting context by
taking turns in processing it.
      </p>
      <p>These very basic application scenarios are meant to
abstractly illustrate problems that can be solved by the GRC
system. In real-world environments many applications from
different providers are interwoven by the exchange of
context information. An easy way to cope with the problems of
interdependent contexts would be to analyze the processing
graph of an environment to detect cycles and splits.
Unfortunately there are some major problems with that
approach: Firstly, in highly dynamic environments the
processing graph is not stable. Every time an application leaves
the environment or a new application enters, the context
processing topology changes and so does the processing graph.
Dynamically refreshing and analyzing the processing graph
would generate an enormous overhead. Secondly, in large
settings the processing graphs become complex, consuming
much time and energy to analyze. And thirdly, extracting
the processing graph from an environment and analyzing it
in its entirety requires a general understanding of all contexts
that are communicated and processed in the environment.
To illustrate the complexity of the task, Figure 3 shows
a reduced processing graph (subscription graph) from the
AwareOffice. The subscription graph only shows the
processing dependencies on basis of the types of artifacts and
types of contexts they use.</p>
      <p>In this reduced type of processing graph only the cycle of
meeting contexts established by the PDAs can be detected
directly. Other possible cycles in this graph, like the ones
established by the doorplate and digital camera or the
doorplate, digital camera and PDAs are harder to detect.
Originally the setting shown here consists of 5 cups, 6 chairs,
3 tables, 3 windows, 2 pens, 1 sponge, 1
digital camera, 1 doorplate, 1 air conditioning unit and 5 PDAs;
altogether an environment of only 28 artifacts.</p>
      <p>A section of the complete processing graph is shown in Figure
4. It only contains the cups, chairs, pens, PDAs, doorplate
and camera. Even though the graph contains all the
information on context dependencies, the splitting is hard to
detect, because not every single piece of context information
is represented as a separate node, but contexts of the same
type are subsumed in one node. Still this graph
representing only 20 artifacts and their contexts gives an impression
of the complexity of processing dependencies in real-world
environments.</p>
      <p>As a solution to finding cycles and splittings in large-scale
ubiquitous computing environments we propose Genetic
Relation of Contexts (GRC). This algorithm provides
information on the interdependencies of contexts with minimal
overhead in terms of computing and memory usage. The
main advantage of GRC is that no information on the
topology of context processing is needed, allowing it to run in a
distributed manner without centralized management. The
algorithm is implemented as part of the software stack of
every artifact. By providing common service access points
(SAPs), GRC can communicate dependency information to
the used semantic context processing layer.</p>
    </sec>
    <sec id="sec-3">
      <title>3. GENETIC RELATION OF CONTEXTS</title>
      <p>Genetic Relation of Contexts is a method based on ideas
derived from biological genetics and genetic algorithms. It
is designed to provide an understanding of the relationship
between different pieces of context information. As part
of a context management layer, GRC is located between
communication and context processing in the application
stack as shown in Figure 5.</p>
      <p>On that level context information is just a data type with
some context specific attributes. The semantic meaning of
context is evaluated one layer higher by the semantic context
processing. This separation of context management and
semantic context processing allows for the use of GRC without
being restricted in the choice of a semantic context model.
The intention of GRC is to provide a direct measure of the
degree of relationship of derived context data. This measure
together with the original contexts is then provided to the
context processing layer.</p>
    </sec>
    <sec id="sec-4">
      <title>3.1 Genomes for Contexts</title>
      <p>GRC establishes a measure of relationship by introducing a
context genome. The genome represents the identity of the
information carried by a context.</p>
      <p>When a basic context is generated, a new genome is
generated at the same time; when a context is derived from other
context information, the genomes of the parent contexts are
combined to build the genome of the newly created context.
An application that itself derives new contexts from other
source contexts can compare the genetic fingerprints of the
source contexts to determine whether these are independent
or somehow related. The semantic context processing layer
can then decide whether and how to process the contexts.
This can range from discarding contexts, to specifically
selecting the processing algorithm that suits the source data
best.</p>
      <p>When initially derived from sensor data, each context C is
associated with a randomly generated bit-vector of length l
representing its genome. The genome Γ is then sequenced
into n genes γ.</p>
      <p>ΓC := (γC,1, γC,2, · · · , γC,n)    (1)</p>
      <p>By sequencing Γ, every resulting gene becomes associated with
a functional locus denoted by its index. Other than in
genetic algorithms, the locus of a gene is of functional
importance as it stores the relationship information. In classical
genetic algorithms the genome itself is functional data. Its
structure is directly mapped to the ”fitness” of the pattern
(solution) by an objective function.</p>
      <p>In GRC the genome itself is not functional in this sense. It
does not represent the fitness or quality of the context, but
its informational identity. Only by comparing the genetic
fingerprints of contexts does GRC produce a measure of
relationship that can be mapped to a notion of context quality.
The parameters influencing the performance of GRC are the
number of genes n in a genome and the size of the single
genes. The size of genes determines how many different
alleles exist. An allele is a value the gene can take. In a
bit-vector genome the number of genes n and alleles r is associated
with the length of the genome l by the following formula:
l = n · ln r / ln 2    (2)
Increasing n leads to a higher resolution of relationship,
whereas increasing r results in a lower systematic error rate.
More details on the influence of the choice of system
parameters are discussed in Section 4. In general, in contrast to
biological genomes, the context genome has to be very small.
For comparison, Escherichia coli, an intestinal bacterium, has about
4,500 genes encoded in 4.6 · 10⁶ base pairs, each carrying 4
bits of information, which sums up to 2.19 MB of genetic
information in total. The bit-vector representing the genome
of a context typically carries 50–200 bytes. This reduction is
possible because, firstly, the context genome does not have
to encode any functional information other than the
relationship of contexts, whereas the biological genome stores much
more information. Secondly, the context genome can be
designed to match the needs of context-aware environments in
terms of the number of generations of contexts that have to
be distinguishable.</p>
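      <p>As an illustration (our sketch in Python, not the paper's implementation; the names genome_length and new_genome are ours), a genome for a freshly sensed context can be modeled as n genes drawn uniformly from r alleles, with the bit length given by Formula 2:

```python
import math
import random

def genome_length(n: int, r: int) -> int:
    """Formula 2: bit length of a genome with n genes and
    r alleles per gene, l = n * ln(r) / ln(2)."""
    return round(n * math.log(r) / math.log(2))

def new_genome(n: int, r: int) -> list:
    """A fresh random genome for a context derived directly
    from sensor data: each gene is an allele in {0,...,r-1}."""
    return [random.randrange(r) for _ in range(n)]

# The parameter set used in the paper's evaluation:
n, r = 100, 256
print(genome_length(n, r))  # 800 bits = 100 bytes
genome = new_genome(n, r)
```

With n = 100 and r = 256 each gene occupies one byte, so the genome fits the 50–200 byte range quoted above.</p>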
      <p>When a context is derived from other source contexts, but
not directly from sensor data, the source contexts are
already associated with a genome that can be used to derive
the genome of the newly produced context. This
recombination of existing genomes to form a new one is called
”crossover”.</p>
    </sec>
    <sec id="sec-5">
      <title>3.2 Probabilistic Multi Site Crossover</title>
      <p>
        To preserve the information of relationship a specially
designed probabilistic crossover method is applied. In genetic
algorithms a large variety of crossover methods is used [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">8,
9, 10</xref>
        ]. These are designed to preserve schemata. Schemata
are patterns in the genome used by genetic algorithms that
represent parts of the solutions to an optimization problem.
A detailed discussion of schemata can be found in [
        <xref ref-type="bibr" rid="ref9">11</xref>
        ]. The
important feature that makes classical crossover methods
unsuitable for GRC is that schemata are independent of the
locus of the genes in the pattern. So, common crossover
operators used in genetic algorithms do not preserve the loci
of genes that GRC uses to represent the relationship
information.
      </p>
      <p>Probabilistic Multi Site Crossover (PMSC) is a new crossover
operator specially designed to preserve relationship
information in the genes of contexts. In Figure 6 on the next page,
the basic application of the PMSC-operator is shown. Two
parent genomes ΓC1 and ΓC2 are recombined into a child
genome ΓC3 . The different alleles of genes are represented
by capital letters. PMSC steps through ΓC1 and ΓC2 locus
by locus, randomly choosing one parental gene at each
locus and marking it to be handed down to the child. Then
ΓC3 is generated by copying the alleles of the marked
genes to the corresponding loci in the child genome. A
mutation operator is not applied as that would adulterate the
relationship information.</p>
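      <p>The locus-by-locus selection described above can be sketched as follows (our Python illustration; the function name pmsc is ours, and the operator generalizes directly to more than two parents):

```python
import random

def pmsc(parents: list, weights=None) -> list:
    """Probabilistic Multi Site Crossover: at each locus, copy
    the allele of one randomly chosen parent into the child.
    Loci are preserved and no mutation is applied, so the
    relationship information survives recombination."""
    n = len(parents[0])
    assert all(len(p) == n for p in parents)
    child = []
    for locus in range(n):
        donor = random.choices(parents, weights=weights)[0]
        child.append(donor[locus])
    return child

# Basic case: two parents, equal heredity probability 0.5.
g1 = [random.randrange(256) for _ in range(100)]
g2 = [random.randrange(256) for _ in range(100)]
g3 = pmsc([g1, g2])
```

The optional weights argument anticipates the adjustable heredity probabilities discussed below.</p>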
      <p>The example above shows the most basic application of
the PMSC-operator. Context derivation algorithms can use
more than two input contexts, but will always produce one
output context at a time. PMSC can be used to recombine
the genomes of multiple source contexts. The maximum
number depends on the number of genes n, and how many
generations of contexts have to be distinguishable in the
environment.</p>
      <p>As an alternative to handing down the genes of each parent with
the same probability, PMSC allows adjusting the heredity
probability to represent the amount of information that is
handed down from a source context to a target context in
the derivation process. By that means GRC can be used also
to provide a direct measure for the amount of information
that is part of multiple contexts.</p>
    </sec>
    <sec id="sec-6">
      <title>3.3 Degree of Relationship</title>
      <p>To determine the degree of relationship between input
contexts, the consumer has to analyze their genomes. The
analysis of genomes has to be done pairwise, resulting in a
relation matrix in the case of more than two source contexts.
First an indicator function fi is defined:
fi(γC1,i, γC2,i) = 1 if γC1,i = γC2,i, and 0 otherwise    (3)
This function produces a 1 if the alleles of the genes at a
locus of the genomes of both contexts match, and 0 otherwise.
The degree of relationship is then computed by summing up
the results of fi over all loci and dividing by the number of
genes:</p>
      <p>rel(C1, C2) = (1/N) · Σ(n=1 to N) fn(γC1,n, γC2,n)    (4)
This method allows for very fast and computationally
inexpensive determination of the degree of relationship of
contexts. The resulting ratio can directly represent the ratio of
information both tested contexts have in common.
Two standard patterns of context dependencies will be used
to illustrate the functionality of GRC: Figure 7 shows a
simple pattern called ”single line of inheritance”.</p>
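      <p>Formulas 3 and 4 translate into a few lines of code (our Python sketch; the function name rel is ours):

```python
def rel(g1: list, g2: list) -> float:
    """Degree of relationship (Formulas 3 and 4): the fraction
    of loci at which both genomes carry the same allele."""
    assert len(g1) == len(g2)
    return sum(1 for a, b in zip(g1, g2) if a == b) / len(g1)

print(rel([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0 - identical genomes
print(rel([1, 2, 3, 4], [5, 6, 7, 8]))  # 0.0 - unrelated genomes
print(rel([1, 2, 3, 4], [1, 2, 9, 9]))  # 0.5 - half the loci match
```
</p>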
      <p>In this pattern in every generation of context one new piece
of context information is added to an unbroken line of
context derivations. Figure 8 shows the values of rel(C1, Ci)
– the degree of relationship – in the lower curve. In this
simulation the genes of each parent were handed down with
a probability of 0.5, resembling the basic case of PMSC
introduced in Section 3.2.</p>
      <p>The upper curve shows the rel(Ci−1, Ci) values; the relation
of every offspring to its direct parent. The y-axis in the
figure represents the degree of relationship, the x-axis denotes
the number of generations. As expected the degree of
relationship drops from a starting value of approx. 0.5 with
every generation. The simulation was set up with a genome
length of 100 genes with 256 alleles.</p>
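      <p>The lower curve can be reproduced with a small Monte Carlo experiment (our sketch, using the same parameters of 100 genes and 256 alleles; the helper names are ours):

```python
import random

def pmsc2(a, b):
    # Two-parent PMSC: heredity probability 0.5 per locus.
    return [random.choice(pair) for pair in zip(a, b)]

def rel(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

random.seed(0)
n, r, generations = 100, 256, 6
c1 = [random.randrange(r) for _ in range(n)]
line = [c1]
for _ in range(generations):
    fresh = [random.randrange(r) for _ in range(n)]  # independent new context
    line.append(pmsc2(line[-1], fresh))

# rel(C1, Ci) roughly halves with each generation (0.5, 0.25, ...),
# apart from the small systematic overestimate from duplicate alleles.
print([round(rel(c1, c), 2) for c in line[1:]])
```
</p>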
      <p>The second standard pattern – ”multi relation” – is shown
in Figure 9. Beginning with two independent contexts C1
and C2, new contexts are derived by mating a context with
one of its parents. This pattern can e.g. be produced by
cyclic processing of contexts.</p>
      <p>Figure 10 shows the simulation results for ”multi relation”,
again using a genome length of 100 genes with 256 alleles.
The y-axis represents the degree of relationship, the x-axis
denotes the number of generations.
The curve shows the rel(Ci−1, Ci) values, starting from i =
3. As expected the degree of relationship increases with
every generation: from a value of approx. 0.5 in the first
generation to 0.75 in the second and 0.875 in the third.</p>
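      <p>The quoted values follow the pattern 1 − 2^−i, since each mating with a parent halves the remaining unshared fraction (a quick check, assuming equal heredity probabilities):

```python
# Expected rel values for the first three "multi relation" generations.
expected = [1 - 0.5 ** i for i in range(1, 4)]
print(expected)  # [0.5, 0.75, 0.875]
```
</p>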
    </sec>
    <sec id="sec-7">
      <title>4. ANALYSIS AND EVALUATION</title>
      <p>In this section we analyze and evaluate the performance of
the GRC-system. Firstly the theoretical background of GRC
is reviewed, then first results of the application of GRC in
simulated and real-world environments are presented.</p>
    </sec>
    <sec id="sec-8">
      <title>4.1 Systematic Overestimation of the Degree of Relationship</title>
      <p>GRC is a probabilistic method to determine the relationship
of contexts in a ubiquitous computing environment. So it
does not give the real degree of relationship as an output,
but an estimate of that value. The quality of that estimate
depends on variable parameters that can be freely set up to
suit the demands of the environment GRC is used in, like
the number of genes and alleles.</p>
      <p>The degree of relationship provided by GRC is intended to
be used to filter contexts from processing if they do not meet
the requirements of context-consuming applications in terms
of independence from each other. That means the
application developer can provide a threshold for the degree of
relationship of the input contexts an application can accept
in different situations. Contexts that fall below this value
are processed, while contexts that exceed the threshold are
discarded.</p>
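      <p>A minimal sketch of such threshold filtering (our illustration, not the paper's exact pipeline; the function name is ours):

```python
def filter_independent(candidates, threshold=0.5):
    """Greedily accept an input context only if its degree of
    relationship to every already accepted context stays at or
    below the application-supplied threshold."""
    def rel(a, b):
        return sum(x == y for x, y in zip(a, b)) / len(a)
    accepted = []
    for genome in candidates:
        if all(rel(genome, a) <= threshold for a in accepted):
            accepted.append(genome)
    return accepted

# A duplicate genome (rel = 1.0) is filtered out, an unrelated one kept:
g1, g2, g3 = [0] * 100, [0] * 100, [1] * 100
print(len(filter_independent([g1, g2, g3])))  # 2
```
</p>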
      <p>Due to its systems design GRC tends to sightly over
estimate the degree of relationship between contexts. The finite
size of the single genes – the number of alleles – can lead
incidentally, duplicate alleles in a locus when genomes are
produced from random bit-vectors. If the bit-vectors are
produced with an uniform distribution, the probability P (i)
to encounter i duplicate alleles in two random genomes with
n genes and r possible alleles is:</p>
      <p>P (i) =
µ 1 ¶i µ
r</p>
      <p>The probability to encounter an error caused by duplicates
in a derivation process that involves m parent genomes, is
bounded by the probability P (err). P (err) describes that
from beginning – after the random selection of genomes –
at least two of the parents have the same allele in one
locus. This is complementary to the probability of none of the
parents having the same allele in one locus:</p>
      <p>m−1
P (err) = 1 − Y
j=0
r − j
r
= 1 −</p>
      <p>r!
rm(r − m)!
(6)
Figure 11 shows the P (err) for a setting with 256 alleles
and up to 100 parent contexts involved in one single
derivation process. In Figure 12 the same curve is shown for up
to 10 parents, which is much more relevant in real-world
environments.</p>
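      <p>Formula 6 is cheap to evaluate directly (our Python sketch; the function name p_err is ours):

```python
def p_err(m: int, r: int) -> float:
    """Formula 6: probability that among m randomly generated
    parent genomes at least two share the same allele at a given
    locus, for genes with r possible alleles - a birthday-problem
    style bound on duplicate-induced errors."""
    prod = 1.0
    for j in range(m):
        prod *= (r - j) / r
    return 1.0 - prod

# For the evaluated setup of r = 256 alleles and two parents:
print(round(p_err(2, 256), 4))  # 0.0039
```

For two parents this gives 1/256 ≈ 0.0039.</p>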
      <p>Independent from the derivation process, the maximum
estimation error would occur if all duplicates would lead to an
error. This is not very likely as duplicate errors can vanish
P(err) with 256 different Alleles
1</p>
      <p>
        Analyses of application scenarios [
        <xref ref-type="bibr" rid="ref4 ref5">7, 6</xref>
        ] have produced a set
of standard patterns of context dependencies showing that
even in large-scale environments the number of different seed
contexts that are involved in one single derivation process is
very limited. The above estimator models the most common
derivation process, while Formula 6 provides the basis for an
estimator for the upper bound of the error in more complex
processes by exchanging ασ for P (err) in Formula 8.
      </p>
    </sec>
    <sec id="sec-9">
      <title>4.2 System Parameters</title>
      <p>The first step in applying GRC in a real environment is to
find suitable system parameters to set up the system. For
our evaluation, these parameters were derived from
extensive simulation of the GRC-system in MATLAB. As basis
for the simulations we used our results from the application
analyses. Table 1 shows the variances and means of the
degree of relationship for a simulated derivation process of
one generation and two parents, for r = 32, r = 256 and
r = 1024. The number of simulated
derivation processes was 10,000.</p>
      <p>From the simulation we derived a setup with n = 100 and
r = 256 as the best compromise of performance, memory
consumption and needed computing power. We used this
parameter set to set up several environment simulations using
our Java Context Processing and Communication Simulator
”context sim”. We also used it to perform first real-world
tests of GRC in the AwareOffice setting.</p>
    </sec>
    <sec id="sec-10">
      <title>4.3 Results</title>
      <p>First results from the simulations of larger ubiquitous
computing environments are very promising. The simulation in
context sim resembles the AwareOffice setting to yield
results comparable to those from the real-world trial. Figure
3 shows the processing graph of the simulation. As explained
in Section 2, this setting contains two problem scenarios –
splitting and multiplication and cyclic usage. 28 artifacts
were simulated – 5 cups, 6 chairs, 3 tables, 3
windows, 2 pens, 1 sponge, 1 digital camera, 1
doorplate, 1 air conditioning unit and 5 PDAs. The simulations ran for
1000 activity steps of all artifacts each.</p>
      <p>
        In the first simulation runs the GRC-system was switched off to
produce reference results representing a common ubiquitous
computing environment with no special quality management
system in operation. Based on a survey on common
achievable context recognition rates [
        <xref ref-type="bibr" rid="ref10">12</xref>
        ] the mean recognition rate
for all contexts was set to a comparably high value of 0.9 with
a standard deviation of 0.05. The statistics for one of the
PDA devices after the simulation run read as follows:
--------------------------------------------
Total active steps: 1000
Total number of contexts consumed: 3348
Total number of contexts produced: 1436
Overall recognition rate: 27.27%
Mean relationship of consumed Cs: 0.6698
Var of relationship of consumed Cs: 0.22008
--------------------------------------------
The first line of the print shows the number of active
simulation steps for that artifact. The second and third lines show
the number of contexts consumed and produced, respectively, by
the PDA. The device received more than 3 contexts every
simulation step, while it needed an average of 2.33 input
contexts to derive an output context. This indicates the very
high number of contexts communicated in total in the
environment during only 1000 active steps. The overall context
recognition rate of the PDA is computed from the initial
value of 90% for all first level contexts. For the following
derivation process, a mean recognition rate of 90% is
assumed as well (see above). Splitting of contexts and the
possible cycle in the meeting context lead to a recognition
rate of only 27.27%, which would not be acceptable for a
user. As the PDA devices only produce two different
contexts, this is worse than blind guessing the context. The
errors and decrease in context quality are also reflected by
the high degree of relationship of the contexts the PDA has
consumed. The value of over 0.6 indicates that most of the
contexts had more than half of their information in
common. The high variance of the relationship points to a very
unstable system in terms of information independence and
context quality.
      </p>
      <p>In a second run of simulations GRC was activated. All other
parameters of the simulation remained unchanged. The
results show significant differences to those produced without
GRC in operation:
--------------------------------------------
Total active steps: 1000
Total number of contexts consumed: 1395
Total number of contexts produced: 465
Overall recognition rate: 72.92%
Mean relationship of consumed Cs: 0.0039
Var of relationship of consumed Cs: 3.77E-5
--------------------------------------------
In this simulation run the GRC-system was set to filter out
all contexts that had a degree of relationship of greater than
0.5. This results in a significantly reduced number of
communicated contexts in the environment. Only 1395 contexts
were consumed and 465 were produced, averaging 3
consumed input contexts per output context. This means a
reduction of the load on the communication channel of about
60%. At the same time the recognition rate of the PDA
increased by over 45% to 72.92%. This is due to the filtering
of highly interdependent contexts that led to errors in
context recognition in the first simulation setup without GRC
activated. The increase in context quality is also reflected
by the low mean degree of relationship of the contexts that
were processed by the device.</p>
      <p>
        As the environment for the real world experiment we used
the AwareOffice setting at TecO. In the AwareOffice
artifacts are implemented on basis of the Particle Computer
platform [
        <xref ref-type="bibr" rid="ref11">13</xref>
        ]. The standard context processing and
communication stack of the Particle Computer – ConCom [
        <xref ref-type="bibr" rid="ref12">14</xref>
        ] – was
extended with the GRC functionality. The AwareOffice environment
currently hosts 27 active artifacts: 2 AwarePens, an
augmented whiteboard, 1 digital camera, 12 active chairs,
4 tables, 6 windows and a doorplate. First preliminary
testing results support the findings from the simulation. The
gain in context quality still was not as significant due to the
fact that the real AwareOffice actively tries to cut cycles by
means of timestamp and address filtering, both not suitable
approaches in large heterogeneous settings.
      </p>
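      <p>The timestamp and address filtering mentioned above can be sketched roughly as follows. The record layout and function name are assumptions made for illustration, not the actual AwareOffice code; the sketch only shows why such filtering works for a single known producer but does not scale to large heterogeneous settings, where producer addresses and clock relationships are not known in advance.</p>

```python
# Hypothetical sketch of naive cycle-cutting by timestamp and address:
# a context is dropped if the same producer address has already been
# seen with the same (or a newer) timestamp.
seen = {}  # producer address -> latest timestamp processed

def accept(ctx):
    """Return True if this context is new for its producer (no cycle)."""
    addr, ts = ctx["addr"], ctx["timestamp"]
    if addr in seen and ts <= seen[addr]:
        return False  # already processed: likely a cycle, drop it
    seen[addr] = ts
    return True

events = [
    {"addr": "chair-07", "timestamp": 1},
    {"addr": "chair-07", "timestamp": 1},  # echo of the same context
    {"addr": "chair-07", "timestamp": 2},
]
print([accept(e) for e in events])  # [True, False, True]
```

      <p>This approach requires a table entry per producer and synchronized timestamps, which is why it is not suitable for large heterogeneous environments, whereas GRC needs neither.</p>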
    </sec>
    <sec id="sec-11">
      <title>5. CONCLUSION AND OUTLOOK</title>
      <p>With GRC, a lightweight context management system
becomes available that is able to solve some of the major
problems of context communication and processing connected
to context dependencies in large heterogeneous ubiquitous
computing environments. As more personalized
context-aware services become available, the number of contexts will
increase further, so efficient context management techniques
are needed. The processing graphs generated by small
simulations (see Figures 3 and 4) give an impression of the
complexity in unsupervised, heterogeneous and highly dynamic
settings.</p>
      <p>GRC promises to solve problems that are connected to
interdependencies of context information. It can provide
semantic context models with information on the degree of
relationship of contexts, allowing for filtering highly related
information, or for selecting optimal algorithms for processing
contexts based on their level of interdependence.
The theoretical analysis of GRC in Section 4 shows the
soundness and applicability of the presented system. The
results presented in this paper show the high potential of the
method in building integrated context management systems
that ensure the quality of context in ubiquitous computing
environments. The context recognition rate of a simulated
artifact could be increased by up to 45%, while the
communication overhead caused by low-quality contexts dropped
by 60% at the same time.</p>
      <p>
        The next step in research on this topic will be to integrate
GRC with other context quality algorithms that focus on
problems GRC alone cannot solve. These are, in the first
place, temporal issues of context aging, spatial constraints
induced by the locality of context, and context reliability, to
provide a measure of the semantic quality of context
information [
        <xref ref-type="bibr" rid="ref13">15</xref>
        ].
      </p>
    </sec>
    <sec id="sec-12">
      <title>6. REFERENCES</title>
      <p>[1] M. Weiser. The computer for the 21st century.
Scientific American, 265(3):66–75, 1991.</p>
      <p>[2] Yoshinori Isoda, Shoji Kurakake, and Kazuo Imai.
Context-aware computing system for heterogeneous
applications. In Proceedings of the First International
Workshop on Personalized Context Modeling and
Management for UbiComp Applications, 2005.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Tobias</given-names>
            <surname>Zimmer</surname>
          </string-name>
          .
          <article-title>Towards a Better Understanding of Context Attributes</article-title>
          .
          <source>In Proceedings of PerCom 2004</source>
          , pages
          <fpage>23</fpage>
          -
          <lpage>28</lpage>
          , Orlando, USA, March
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Albrecht</given-names>
            <surname>Schmidt</surname>
          </string-name>
          .
          <article-title>Ubiquitous Computing - Computing in Context</article-title>
          .
          <source>PhD thesis</source>
          , Lancaster University,
          <year>November 2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Saul</given-names>
            <surname>Greenberg</surname>
          </string-name>
          .
          <article-title>Context as a Dynamic Construct</article-title>
          . Human-Computer Interaction,
          <volume>16</volume>
          (
          <issue>2-4</issue>
          ):
          <fpage>257</fpage>
          -
          <lpage>268</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Tobias</given-names>
            <surname>Zimmer</surname>
          </string-name>
          and
          <string-name>
            <given-names>Michael</given-names>
            <surname>Beigl</surname>
          </string-name>
          .
          <article-title>AwareOffice: Integrating Modular Context-Aware Applications</article-title>
          .
          <source>In Proceedings of the 6th International Workshop on Smart Appliances and Wearable Computing (IWSAWC)</source>
          . IEEE Computer Society Press,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Vlad</given-names>
            <surname>Coroama</surname>
          </string-name>
          , Jörg Hähner, Matthias Handy, Patricia Rudolph-Kuhn, Carsten Magerkurth, Jürgen Müller, Moritz Strasser, and
          <string-name>
            <given-names>Tobias</given-names>
            <surname>Zimmer</surname>
          </string-name>
          .
          <article-title>Leben in einer smarten Umgebung - Ubiquitous Computing: Szenarien und Auswirkungen</article-title>
          .
          <source>Gottlieb Daimler- und Karl Benz-Stiftung</source>
          ,
          <year>December 2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>John H.</given-names>
            <surname>Holland</surname>
          </string-name>
          .
          <article-title>Adaptation in natural and artificial systems</article-title>
          . MIT Press, Cambridge, MA, USA,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>K. A.</given-names>
            <surname>De Jong</surname>
          </string-name>
          .
          <article-title>An Analysis of the Behavior of a Class of Genetic Adaptive systems</article-title>
          .
          <source>PhD thesis</source>
          , University of Michigan,
          <year>1975</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>William</given-names>
            <surname>Spears</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Anand</surname>
          </string-name>
          .
          <article-title>A study of crossover operators in genetic programming</article-title>
          .
          <source>In Proceedings of the Sixth International Symposium on Methologies for Intelligent Systems</source>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>David E.</given-names>
            <surname>Goldberg</surname>
          </string-name>
          .
          <article-title>Genetic Algorithms in Search, Optimization, and Machine Learning</article-title>
          .
          <source>Addison Wesley</source>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Thomas</given-names>
            <surname>Ring</surname>
          </string-name>
          .
          <article-title>Performanceanalyse kontextsensitiver anwendungen</article-title>
          .
          <source>Technical report, University of Karlsruhe, Germany, ISSN 1432-7864</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [13] The Particle Computer Company. Website, accessed: 01/2006. http://www.particle-computer.de.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Albert</given-names>
            <surname>Krohn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Michael</given-names>
            <surname>Beigl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Christian</given-names>
            <surname>Decker</surname>
          </string-name>
          , and Tobias Zimmer.
          <article-title>ConCom - A language and Protocol for Communication of Context</article-title>
          .
          <source>Technical report ISSN 1432-7864 2004/19, University of Karlsruhe</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Tobias</given-names>
            <surname>Zimmer</surname>
          </string-name>
          .
          <article-title>QoC: Quality of Context - improving the performance of context-aware applications</article-title>
          .
          <source>In Adjunct Proceedings of Pervasive 2006</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>