<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Taxonomy of Collaborative Context-Aware Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>As'ad Salkham</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Raymond Cunningham</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aline Senart</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vinny Cahill</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Distributed Systems Group Department of Computer Science Trinity College Dublin</institution>
          ,
          <country country="IE">Ireland</country>
        </aff>
      </contrib-group>
      <fpage>899</fpage>
      <lpage>911</lpage>
      <abstract>
<p>Context awareness is a vital element in pervasive and ubiquitous systems. While most existing research has focused on designing context-aware systems to integrate into the environment, less attention has been paid to the interoperability among the entities comprising such systems. In this paper, we consider how the components of a context-aware system can collaborate to achieve a common goal. We provide a taxonomy of such Collaborative Context Awareness (CCA) based on three axes, i.e., goal, approaches and means. We also discuss a number of context-aware systems from different domains, i.e., augmented artefacts, robotics and sensor(/actuator) networks, that exhibit some form of collaboration. Finally, we classify the studied systems according to our taxonomy.</p>
      </abstract>
      <kwd-group>
        <kwd>taxonomy</kwd>
        <kwd>collaboration</kwd>
        <kwd>Collaborative Context Awareness</kwd>
        <kwd>context awareness</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        Context-aware systems are an emerging genre of computer systems that help add some
forms of intelligence to our surroundings. It is well-established that context-aware
(sentient) systems should address three basic requirements, i.e., sensing, inference and
actuation [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Furthermore, a number of ongoing research efforts have been targeting the
definition and classification of context-aware systems; the most recent is a survey on
context-aware systems [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] that suggests a set of common design principles and assesses
a set of available context-aware middlewares and frameworks against those principles.
In this paper, we do not provide a classification nor a survey of context-aware systems.
Readers are encouraged to refer to [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5 ref6">2–6</xref>
        ] for such systems. Instead, we focus on
collaboration and its intrinsic relation to context awareness.
      </p>
      <p>
To our knowledge, there is no existing research providing a taxonomy or a concrete
definition for collaborative context-aware systems that range from small augmented
artefacts to large-scale and highly distributed sensor(/actuator) networks. (The authors
are grateful to Science Foundation Ireland for their support of the work described in
this paper under Investigator award 02/IN1/I250 between 2003 and 2007.) Mäntyjärvi
et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] emphasise reliability in what they refer to as collaborative context
recognition. Additionally, they presume that context-aware devices within a certain area have
common views of the context and can agree on a time- and space-dependent
collaborative context through short-range communication. They describe collaborative context
as the “summary of the situation of the other devices in the local range corrected by
the local context” providing an update strategy for these devices and associated trigger
conditions. In [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], a context-aware communication platform to support smart objects
is described. The platform emphasises the importance of a distributed tuplespace-based
communication model to support inter-object collaboration. The model allows for smart
objects within a vicinity to share a distributed tuplespace, broadcast their data and
contribute with an equal amount of memory.
      </p>
      <p>
        An interesting taxonomy for coordination in Multi-Robot Systems (MRS) was
presented by Farinelli in [
        <xref ref-type="bibr" rid="ref9">9</xref>
]. The MRS taxonomy is divided into four levels, namely
cooperation, knowledge, coordination and organisation. However, this taxonomy considers
only cooperative systems and defines them as those systems composed of “robots that
operate together to perform some global task” [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] for instance, the focus is solely on collaborative context recognition
and how reliable and consistent the outcome of this recognition is. Neither actuation
nor decision making is taken into account and only handheld devices are considered.
The MRS taxonomy [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] does not address context awareness but rather cooperation of
robots. The robots are said to be aware if they have some knowledge of their
team members [
        <xref ref-type="bibr" rid="ref9">9</xref>
]; however, this view is not well-defined and lacks a definition of the nature of
the relevant knowledge. Furthermore, the MRS taxonomy only considers a coordination
protocol-based/protocol-free decision making process and does not include data and
context sharing.
      </p>
      <p>We define a collaborative context-aware system as a system comprising a group
of entities, each capable of sensing, inferring and actuating, that communicate in order to
achieve a common goal. We have identified that collaboration among context-aware
entities may be based not only on communicating contextual information but also on sensed
and fused data, in addition to possible next actions to perform. Such communication
enables efficient collaboration through more precise inference, decision
making and awareness. Moreover, collaborating components can follow a
consensus-based or a consensus-free approach, in which negotiated decisions or local decisions are
taken respectively. Delegation may also be used to achieve optimal behaviour in
collaborative context-aware systems. We see delegation as the ability of entities to pass tasks
to neighbours depending on their estimation of the best option to achieve a specific
common goal. Our contribution is presented in the form of a taxonomy for
Collaborative Context Awareness (CCA) that encompasses the characteristics highlighted
above.</p>
      <p>The remainder of the paper is structured as follows: Section 2 describes the
context-aware systems from different domains that we studied, emphasising the
collaboration among their components. Section 3 presents our taxonomy for CCA. In Section 4,
we provide a classification of the studied systems according to our taxonomy. We conclude in
Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>Studied systems</title>
      <p>
        Our interest in the following systems stems from the fact that they directly address
collaboration while also exhibiting context awareness. However, the definition of
collaboration in different systems may result in a philosophical debate. Some researchers
may prefer to divide collaboration into cooperative (i.e., negotiated decision making
through communication) and coordinated (i.e., local decision making through
communication) solutions, as in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] for instance. Others consider coordination as cooperation
in which an agent performs actions while taking into account the actions performed
by other agents [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. We tend to see collaboration as a synonym for cooperation and
we do not focus on the differences among collaboration, cooperation and coordination.
Nevertheless, our CCA taxonomy is flexible enough to encompass various views and
opinions. In this section, we cover several domains that are the most representative of
CCA. In parallel, we present the systems we studied and provide an analysis of their
characteristics, in particular their collaboration models.
      </p>
      <sec id="sec-2-1">
        <title>Augmented artefacts</title>
        <p>
          One of the best-known projects in the augmented artefacts domain is Smart-Its [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]
which was part of the EU-funded Disappearing Computer Initiative. The idea behind
the project is to develop smart small-scale embedded devices, also known as Smart-Its,
that are able to sense, actuate, compute and communicate. These smart devices are
introduced to help develop and study collective context awareness and to promote the
widespread deployment of ubiquitous computing systems [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Furthermore, the Smart-Its
project encompasses different everyday artefacts (e.g., cups), enabling greater and more
user-friendly perception of the surrounding environment.
        </p>
        <p>
          Smart-Its are generic and exhibit a modular design; in a single Smart-It, an
application-specific processing module forms a bridge between the communication
module and the different sensing and actuation modules. In addition, the
intercommunication of Smart-Its is based on a stateless peer-to-peer protocol where local
broadcast of context and other information within a certain proximity is supported
(i.e., a direct communication scheme) [
          <xref ref-type="bibr" rid="ref11">11</xref>
]. Application-dependent fusion and
inference techniques in a Smart-It are likely to occur in the processing module while a lower
level of fusion could be carried out in the sensor module(s). Also, the generic nature of
Smart-Its enables them to form different kinds of systems, e.g., common goal-oriented
systems that might implement a consensus-based and/or a consensus-free approach.
        </p>
        <p>
          A spin-off of Smart-Its was Smart-Its Friends [
          <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
          ] that emerged from applying
the concept of context proximity to connect Smart-Its. This concept enables Smart-It
devices within range and experiencing similar situations or conditions (e.g., the same
shaking pattern), to be considered near to each other in context (i.e., have a common
context perception) and to be connected or friends. Upon the reception of broadcast
data and the ID of the source, a Smart-It tests the data against a predefined threshold
and declares the source as a friend if the test is passed. Subsequently, a Smart-It
remains identified as a friend even if the connection breaks, subject to
the friendship expiry constraints [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
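The friend test described above can be sketched in a few lines of Python. This fragment is purely illustrative: the class name, the similarity measure and the threshold value are our assumptions, not details taken from the Smart-Its Friends papers.

```python
# Illustrative sketch of the Smart-Its Friends "context proximity" test:
# a device compares a broadcast movement pattern against its own samples
# and declares the sender a friend if the patterns are similar enough.

def pattern_distance(own, received):
    """Mean absolute difference between two equal-length sample sequences."""
    return sum(abs(a - b) for a, b in zip(own, received)) / len(own)

class SmartIt:
    def __init__(self, device_id, threshold=0.5):
        self.device_id = device_id
        self.threshold = threshold
        self.friends = set()

    def on_broadcast(self, sender_id, samples, own_samples):
        """Test received accelerometer samples against our own; befriend on match."""
        if pattern_distance(own_samples, samples) <= self.threshold:
            self.friends.add(sender_id)  # friendship persists until it expires
        return sender_id in self.friends

a = SmartIt("A")
shaken_together = a.on_broadcast("B", [1.0, 2.1, 0.9], [1.1, 2.0, 1.0])
shaken_apart = a.on_broadcast("C", [5.0, 0.0, 3.0], [1.1, 2.0, 1.0])
```

A real implementation would additionally time-stamp friendships so that the expiry constraints mentioned above can be enforced.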
        <p>
          Another related project is Cooperative Artefacts [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. The concept is based on an
infrastructure-less approach to allow easier deployment of these artefacts when
cooperatively assessing specific situations in the environment. The cooperation among
artefacts is solely based on sharing their knowledge through protocol-based direct
communication. The knowledge base in each artefact comprises three types of knowledge:
domain knowledge, observational knowledge and inferred knowledge. In an artefact,
facts are defined as “the foundation for any decision-making and action-taking within
the artefact”, while rules allow inference of advanced or upgraded knowledge based on
facts and other rules [
          <xref ref-type="bibr" rid="ref15">15</xref>
]. A key element is inferred knowledge, that is, knowledge derived from
previous facts. These facts are based on the three aforementioned local
types of knowledge and/or the knowledge shared by cooperating artefacts. It is
worth mentioning that there are also actuation rules that are responsible for triggering
a corresponding action. In one application [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], a group of chemical containers were
modelled as Cooperative Artefacts equipped with infrared light sensors and ultrasonic
sensors to ensure that all artefacts (containers) are within an approved safety area and
that certain artefacts stay within an acceptable distance of each other. Actuation is
simplified in this application to the control of LEDs to raise an alarm if safety constraints
are not met. Clearly, each Cooperative Artefact is capable of sensing, perception
(fusion), inference, actuation, direct communication and sharing of the knowledge.
        </p>
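The facts-and-rules model described above can be approximated by a small forward-chaining sketch. The Python below is an assumption-laden illustration (the fact names and rule format are ours); the actual artefacts use a Prolog-like rule language.

```python
# A minimal sketch of the Cooperative Artefacts knowledge model: an artefact
# holds facts (domain, observational and shared knowledge) and applies rules
# to derive inferred knowledge; an actuation rule then triggers an alarm.

class Artefact:
    def __init__(self):
        self.facts = set()   # domain + observational + shared knowledge
        self.rules = []      # (premises, conclusion) pairs

    def add_rule(self, premises, conclusion):
        self.rules.append((frozenset(premises), conclusion))

    def infer(self):
        """Forward-chain over the rules until no new fact is derived."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True
        return self.facts

container = Artefact()
container.facts |= {"outside_safety_area"}                        # observation
container.add_rule({"outside_safety_area"}, "safety_violation")   # inference rule
container.add_rule({"safety_violation"}, "led_alarm_on")          # actuation rule
container.infer()
```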
        <p>
          In [
          <xref ref-type="bibr" rid="ref16">16</xref>
], Ricci et al. present an idea that is partially inspired by the stigmergic form
of communication in nature [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. This idea is based on Coordination Artifacts that are
defined as “entities used to instrument the environment so as to fruitfully support
cooperative and social activities of agent ensembles” [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], for instance, street semaphores,
blackboards and maps. Coordination is also seen in its very general concept as the
“management of dependencies among separate activities” [
          <xref ref-type="bibr" rid="ref16">16</xref>
]. No distinction is made between coordination and cooperation; instead, a distinction
is made between what is referred to as subjective and objective coordination. In the
subjective form, coordination is perceived as an individual activity where the
environment is not part of the coordination and the coordination aims to achieve a subjective
goal through direct inter-entity communication. Objective coordination uses mediators
that are part of the environment to decouple communication between entities to enable
these entities to achieve a common or global goal. TuCSoN [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] is an open source
coordination infrastructure that uses the Coordination Artifacts concept. Examples of these
artefacts in TuCSoN are mailboxes and blackboards for communication, and tuple centres
(i.e., programmable tuple spaces) for knowledge mediation and resource sharing. We
can see these artefacts as communication mediators for a number of agents or entities
which as a whole can form a context-aware system.
        </p>
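The mediated, decoupled communication that tuple centres provide can be sketched with a toy tuple space. The Python below mirrors classic Linda-style operations (out/rd/in); it is an illustration of the coordination style, not the TuCSoN API.

```python
# A toy tuple space in the spirit of the tuple centres described above:
# entities coordinate indirectly by writing and reading tuples through a
# mediating artefact rather than addressing each other directly.

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Deposit a tuple into the space."""
        self.tuples.append(tup)

    def _match(self, template, tup):
        # None in a template position acts as a wildcard.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        """Read (without removing) the first tuple matching the template."""
        return next((t for t in self.tuples if self._match(template, t)), None)

    def in_(self, template):
        """Read and remove the first matching tuple."""
        tup = self.rd(template)
        if tup is not None:
            self.tuples.remove(tup)
        return tup

ts = TupleSpace()
ts.out(("task", "map-area-3", "pending"))     # producer agent posts a task
claimed = ts.in_(("task", None, "pending"))   # consumer agent claims it
```

Because producer and consumer never reference each other, this is the objective (mediated) form of coordination discussed above.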
      </sec>
      <sec id="sec-2-2">
        <title>Robotics</title>
        <p>The robotics domain is a very important source of inspiration for collaboration schemes
and context-aware systems. In this section, we discuss a number of robotic systems that
emphasise their collaboration aspects.</p>
        <p>
          In the cooperative sensing field, Grocholsky et al. [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] propose a scheme for
anonymous cooperation in robotic sensor networks. The scheme involves a decentralised
architecture that enables entities to globally and anonymously cooperate in sensing
without the need for global knowledge. The idea is based on a Decentralised Data
Fusion (DDF) algorithm. The DDF is seen as a decentralised alternative approach to the
typically centralised Kalman filtering and Bayesian estimation techniques. It provides
means for fusing information in a distributed network of sensors. Moreover, a DDF
node depends on data gathered from a group of sensors to generate estimates of some
time-varying state that may then be propagated. Aggregation of information in a single
node is the fusion of local sensor data, local predictions and the directly communicated
information (estimates) from other nodes. Based on this fused information, a
subsequent decision is taken locally by the node. Actuation is not clearly described in this
architecture but can be presumed to exist since its robotic nature involves mobility that
is likely to be controlled by inferred actions.
        </p>
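The flavour of the local fusion step a DDF node performs can be illustrated with a one-dimensional inverse-variance sketch. This is a deliberately simplified, assumption-laden illustration of information-style fusion, not the published DDF algorithm.

```python
# Sketch of a DDF-style local fusion step: a node combines its local estimate
# with estimates received from neighbours, weighting each by its inverse
# variance (an information-filter view of the fusion).

def fuse(estimates):
    """Fuse (mean, variance) pairs by summing their information contributions."""
    total_info = sum(1.0 / var for _, var in estimates)
    fused_mean = sum(mean / var for mean, var in estimates) / total_info
    return fused_mean, 1.0 / total_info

local = (10.0, 4.0)                     # local sensor estimate of the state
neighbours = [(12.0, 4.0), (11.0, 2.0)] # estimates communicated by other nodes
mean, var = fuse([local] + neighbours)
```

Note how the fused variance is smaller than any single input variance, which is the incentive for propagating estimates through the network.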
        <p>
The authors break what they describe as Anonymous Collaborative Decision Making
into coordinated and cooperative solutions, providing definitions for both. They
perceive a cooperative solution “to be a predictive jointly optimal negotiated group
decision in the sense of a Nash equilibrium” [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. On the other hand, “in coordinated
solutions there is no mechanism for this negotiated outcome. Decision makers act locally
but exchange information that may influence each others’ subsequent decisions” [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
Cooperative and coordinated solutions can be seen here as consensus-based and
consensus-free solutions respectively.
        </p>
        <p>
          Millibots [
          <xref ref-type="bibr" rid="ref19">19</xref>
] is a research project at the Robotics Institute at Carnegie
Mellon University that, as the name conveys, has designed “a team of heterogeneous
centimetre-scale robots which coordinate to provide real-time surveillance and
reconnaissance” [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. Millibots exhibit a modular design in sensing, processing (including
inference) and mobility that implies actuation. The sensor modules that a team of
Millibots can utilise may include short/long range sonar, directional infrared and vision.
Subsequently, each team member specialises based on its sensor module(s). Hence,
collaboration is seen as the consequence of distributing functionality and resources (i.e.,
specialised sensing and processing). This collaboration is defined as “the explicit
exchange of information between members of a team”; this is clearly done using direct
communication. Also, collaborative sensing is presented as being “where the sensing
process itself is distributed between one or more robots”; this view is believed to be a
consensus-free one that does not involve negotiated decisions. One of the interesting
applications of the Millibots team is collaborative mapping, i.e., team members are able
to collaborate to collect and fuse sensory information (based on a Bayesian technique)
in order to create a map of the encompassing area.
        </p>
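The Bayesian fusion behind such collaborative mapping can be sketched as a log-odds update of one map cell. The update rule and the numbers below are illustrative assumptions rather than the Millibots implementation.

```python
# Sketch of Bayesian fusion for collaborative mapping: each robot reports an
# occupancy probability for a map cell, and the reports are fused as
# independent evidence in log-odds form.

import math

def log_odds(p):
    return math.log(p / (1.0 - p))

def fuse_cell(prior, observations):
    """Fuse per-robot occupancy probabilities for one map cell."""
    l = log_odds(prior) + sum(log_odds(p) for p in observations)
    return 1.0 / (1.0 + math.exp(-l))  # back to a probability

# Two robots both think the cell is probably occupied.
posterior = fuse_cell(0.5, [0.7, 0.8])
```

With agreeing reports the posterior exceeds either individual report, which is why pooling sensory information across the team yields a more confident map.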
        <p>
As for schemes inspired by nature, swarming has inspired the creation of many
systems. For instance, Parunak et al. [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] provide their view of collaborative sensing
through swarming of multiple Unmanned Aerial Vehicles (UAVs), notably for
military imaging applications. They define different types of coordination that can occur
within a group of UAVs: spatial, temporal, and team coordination. Spatial coordination
is concerned with efficiently distributing UAVs over an observed area while temporal
coordination ensures the timeliness of all UAVs’ behaviour and information exchange.
Team coordination is basically inspired by natural systems (e.g., colonies of social
insects), and aims at optimising the distribution of roles among UAVs and managing their
formation, maintenance and dispersion. (The Nash equilibrium mentioned earlier is named
after Nobel Laureate (in economics) and mathematician John Nash. We quote Roger
McCain’s definition: “If there is a set of strategies with the property that no player can benefit
by changing her strategy while the other players keep their strategies unchanged, then that set
of strategies and the corresponding payoffs constitute the Nash Equilibrium”.)
        </p>
        <p>
          There are three principles and techniques that are needed to achieve collaborative
sensing in this context. First is team and role coordination, which comprises dynamic entity
classification and dynamic role activation. The dynamic classification enables adding
or removing roles to or from an entity while dynamic activation enables changing roles
over time within an entity. Second is local optimisation where each UAV is assumed
to make local decisions (i.e., consensus-free), in order to accomplish the overall
designated mission on time, i.e., UAVs are capable of reconfiguration based on the
perceived quality with which the goal is achieved, for example the image quality. Third are
the techniques inspired by natural systems, for instance, stigmergy [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Basically,
the scheme described employs the idea of digital pheromones from which maps are
formed to enable real-time path planning.
        </p>
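The digital-pheromone idea above can be sketched as deposit-and-evaporate operations on a grid, with a vehicle steering toward the least-marked neighbouring cell. The grid representation, rates and steering rule are illustrative assumptions, not the scheme in the paper.

```python
# Minimal sketch of digital pheromones for coverage-style path planning:
# visits deposit pheromone on grid cells, pheromone evaporates over time, and
# a vehicle moves toward the least-marked neighbour to spread coverage.

def evaporate(grid, rate=0.5):
    """Decay all pheromone levels by a fixed factor per time step."""
    return {cell: level * rate for cell, level in grid.items()}

def deposit(grid, cell, amount=1.0):
    """Mark a visited cell with pheromone."""
    grid[cell] = grid.get(cell, 0.0) + amount
    return grid

def next_cell(grid, neighbours):
    """Prefer the neighbour with the least pheromone (i.e., least visited)."""
    return min(neighbours, key=lambda c: grid.get(c, 0.0))

grid = {}
grid = deposit(grid, (0, 0))   # visit cell (0, 0)
grid = evaporate(grid)         # one time step passes
grid = deposit(grid, (0, 1))   # visit cell (0, 1)
choice = next_cell(grid, [(0, 0), (0, 1), (1, 0)])
```

Because the pheromone map is sensed rather than messaged, this is a stigmergic (indirect) form of coordination.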
      </sec>
      <sec id="sec-2-3">
        <title>Sensor(/actuator) networks</title>
        <p>
          Sensor networks are a rich domain for studying and experimenting with new
collaboration models for context awareness. This domain becomes more challenging when
actuators are also involved, such as in [
          <xref ref-type="bibr" rid="ref21">21</xref>
]. Applications of sensor(/actuator) networks span
many fields, for instance surveillance/tracking systems, usually for the military [
          <xref ref-type="bibr" rid="ref22 ref23">22, 23</xref>
          ]
and Intelligent Transportation Systems (ITS).
        </p>
        <p>
          Melodia et al. [
          <xref ref-type="bibr" rid="ref21">21</xref>
] provide a framework in which a sensor-actor coordination
model in Wireless Sensor and Actor Networks (WSAN) is specified. This model is
based on event-driven clustering of sensors and actors, i.e., a cluster is created
on-the-fly after being triggered by an event. Besides this model, the framework encompasses an
actor-actor coordination model. The sensor-actor coordination occurs whilst
establishing data paths between sensors and actors. On the other hand, actor-actor coordination
occurs when actors coordinate to make an optimal decision to perform the action, i.e.,
consensus-based. Furthermore, a cluster emerges only in a single event area where a
group of sensors send their data to the same actor/collector that, as part of its role,
centrally fuses the gathered data. A notion of reliability is also introduced in terms of
reliable packets. This notion depends on a latency bound and a reliability threshold [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ].
Moreover, sensor-actor coordination is based on a distributed protocol of localised
routing decisions, i.e., consensus-free. The protocol assumes that each sensor is aware of its
own position, its neighbours’ positions, and the actors’ positions. Each sensor node is also governed
by a multi-state protocol for optimal operation, i.e., energy consumption, reliability, etc.
Concerning actor-actor coordination, a localised auction protocol is proposed. The
protocol is inspired by a real-time auction protocol that defines the behaviour of actors
participating in transactions as buyers and sellers. The protocol is consensus-based and
designed to deal with selecting the best actor in an overlapping area of actors. This
model implies some form of delegation in which the most suitable actor is assigned the
task to perform. Furthermore, sensors and actors communicate directly among
themselves in both models.
        </p>
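The delegation step of such a localised auction can be sketched as follows. The bid function below (distance traded against remaining energy) is an illustrative assumption, not the protocol from the paper.

```python
# Sketch of a localised auction for actor-actor coordination: actors covering
# an overlapping event area bid, and the task is delegated to the best bidder.

def bid(actor, event_pos):
    """Lower bid is better: cost grows with distance, shrinks with energy."""
    dist = abs(actor["pos"][0] - event_pos[0]) + abs(actor["pos"][1] - event_pos[1])
    return dist / max(actor["energy"], 1e-9)

def auction(actors, event_pos):
    """Select the actor with the best (lowest) bid for the event."""
    return min(actors, key=lambda a: bid(a, event_pos))

actors = [
    {"id": "A1", "pos": (0, 0), "energy": 1.0},  # close-ish but low energy
    {"id": "A2", "pos": (1, 1), "energy": 4.0},  # nearer and well charged
]
winner = auction(actors, event_pos=(2, 2))
```

The winning actor is assigned the action, which is the consensus-based delegation behaviour described above.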
        <p>
          The CoSense project [
          <xref ref-type="bibr" rid="ref22 ref23">22, 23</xref>
], developed at the Palo Alto Research Center (PARC),
aims at providing a collaborative sensing scheme for target recognition and
condition monitoring. Moreover, the focus is on energy-constrained environments filled
with low-observable targets. An energy-efficient sensor collaboration is presented in
[
          <xref ref-type="bibr" rid="ref22 ref23">22, 23</xref>
]. This collaboration is information-driven, i.e., it dynamically determines
who should sense, what to sense and to whom the sensed information must be passed.
An assumption is made that each sensor has its communication and local sensing range.
Furthermore, a sensor node is assumed to have local estimation capabilities of the cost
of sensing, processing, and direct data communication to another node in terms of its
power usage. The Information-Driven approach subsequently enables each sensor node
to efficiently manage its communication and processing resources. This entails that sensor
selection is based on a local decision thus exhibiting a consensus-free approach. Also,
a leader node holds the current belief and receives all passed information for fusion for
a certain period of time. This leader node may then act as a relay station. Otherwise, the
belief can travel through the network where the leadership is changed dynamically.
        </p>
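The consensus-free sensor selection described above can be sketched as a local utility-minus-cost choice. The utility proxy and cost figures below are illustrative assumptions rather than the CoSense formulation.

```python
# Sketch of information-driven sensor selection: the leader picks the next
# sensor to query by trading an information-gain proxy against the estimated
# cost of reaching that sensor. The choice is local, i.e., consensus-free.

def utility(sensor, target_pos):
    """Crude information-gain proxy: sensors closer to the target help more."""
    dist = abs(sensor["pos"] - target_pos)
    return 1.0 / (1.0 + dist)

def select_next(sensors, target_pos, comm_cost_per_hop=0.05):
    """Choose the most useful sensor after discounting communication cost."""
    def score(s):
        return utility(s, target_pos) - comm_cost_per_hop * s["hops"]
    return max(sensors, key=score)

sensors = [
    {"id": "S1", "pos": 0.0, "hops": 1},  # far from the target, cheap to reach
    {"id": "S2", "pos": 9.0, "hops": 2},  # near the target, slightly costlier
]
chosen = select_next(sensors, target_pos=10.0)
```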
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Collaborative Context Awareness (CCA) taxonomy</title>
      <p>
Our methodology in designing the CCA taxonomy draws inspiration from
B. Randell’s [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] and J.C. Laprie’s work in defining Dependability. In addition,
Roy Sterritt’s method of defining Autonomic Computing [
        <xref ref-type="bibr" rid="ref25 ref26">25, 26</xref>
        ] was an additional
influence. We designed our CCA taxonomy based on the commonalities of different
context-aware systems that emphasised collaboration. The structure of the CCA
taxonomy illustrated in Figure 1 is based on three axes: Goal, Approaches and Means.
Goal — every collaborative (context-aware) system/sub-system aims at achieving
a common goal, e.g., to optimally accomplish the assigned mission through some
form of collaboration, cooperation and/or coordination among the comprising entities.
We believe this must be achieved through information exchange and possibly some
delegation techniques.
      </p>
      <p>Approaches — depending on the application requirements, collaboration may
follow a consensus-free and/or a consensus-based approach.</p>
      <p>Consensus-free — entities may need to take local decisions; hence they do not negotiate
a common decision but communicate different information, i.e., sensory data, fused
information, context/sub-context and next action(s), in order to aid this local decision
making.</p>
      <p>Consensus-based — a system/sub-system may need negotiated outcomes, consequently
entities are compelled to communicate different information in order to take a common
decision.</p>
      <p>Means — we identified seven means that may be used by collaborative
context-aware systems to function.</p>
      <p>Sensing — observing the environment typically entails the ability to receive different
kinds of stimuli.</p>
      <p>Fusion — the usual presence of numerous data sources in context-aware systems
justifies the need to gather different low-level pieces of data and information and the
ability to build more reliable, higher-level knowledge.</p>
      <p>Actuation — adjusting the system behaviour requires the realisation of inferred action(s),
whether physically applied to the environment or not.</p>
      <p>Inference — knowledge is a crucial element in context-aware systems; hence the
ability to build, update and reason about this knowledge is vital. In addition, deciding
upon needed action(s) is important for context-aware systems’ adaptability.</p>
      <p>Communication — components/entities must exhibit a form of communication, i.e.,
indirect and/or direct, in order to realise their collaboration.</p>
      <p>Direct — depending on the system architecture and application, entities may
communicate using a dedicated channel for peer-to-peer, multicast or broadcast communication.</p>
      <p>Indirect — stigmergic communication inspired by nature may enable more efficient
collaboration through the ability to communicate by changing and then sensing the
shared environment.</p>
      <p>Delegation — optimality is normally an important characteristic of context-aware
systems; hence an entity capable of estimating, for instance, computational and/or
power needs could decide that it is more efficient to delegate a task to a neighbouring
entity found more capable of handling it.</p>
      <p>UMICS'06</p>
      <p>The CCA taxonomy is flexible enough to encompass a large number of
collaborative context-aware systems. For instance, a system of UAVs may be classified under the
taxonomy as a consensus-free collaborative context-aware system that exhibits
sensing (vision), actuation (manoeuvring), direct communication, fusion and inference. The
goal of such a system could be drawing a map of a certain terrain with the best quality
possible. In the next section, we classify the studied systems against the CCA taxonomy.</p>
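The UAV classification example above can be expressed as a small data structure whose fields follow the three taxonomy axes. The representation below is our own illustrative sketch; the axis values mirror the text.

```python
# Sketch of a CCA taxonomy entry: a system is placed in the taxonomy by its
# goal, approach(es) and means, with validity checked against the fixed axes.

from dataclasses import dataclass, field

APPROACHES = {"consensus-free", "consensus-based"}
MEANS = {"sensing", "fusion", "actuation", "inference",
         "direct communication", "indirect communication", "delegation"}

@dataclass
class CCAClassification:
    system: str
    goal: str
    approaches: set = field(default_factory=set)
    means: set = field(default_factory=set)

    def is_valid(self):
        """A classification may only use values defined by the taxonomy."""
        return self.approaches <= APPROACHES and self.means <= MEANS

uav_system = CCAClassification(
    system="UAV swarm",
    goal="draw a terrain map with the best possible quality",
    approaches={"consensus-free"},
    means={"sensing", "actuation", "direct communication", "fusion", "inference"},
)
```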
    </sec>
    <sec id="sec-4">
<title>Evaluation</title>
      <p>We provide in Table 1 the evaluation resulting from applying the CCA taxonomy
to the studied systems. We believe that the relevant systems share the same goal of
accomplishing an application-specific mission, hence we omitted the goal criterion
from the evaluation table. We discuss each system and justify the relevant classification.</p>
      <sec id="sec-4-1">
        <title>Smart-Its</title>
        <p>
          Smart-Its exhibit a very generic design and application-specific behaviour. A system
comprising Smart-Its could follow a consensus-free or a consensus-based approach
or both (on different levels). A Smart-It explicitly provides dedicated sensing and
actuation modules along with a processing module that provides inference and
possibly fusion (depending on the implementation). The communication scheme among
Smart-Its is direct at the moment. Indirect communication would also be possible if
the application design benefits from stigmergy [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], i.e., Smart-Its understand/interpret
each others’ physical actuation on the encompassing environment.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>Smart-Its Friends</title>
        <p>Smart-Its Friends are identical to Smart-Its but exhibit a specific form of
connection establishment.</p>
      </sec>
      <sec id="sec-4-3">
        <title>Cooperative Artefacts</title>
        <p>
          These artefacts cooperate by sharing a knowledge base through a query/response
technique. They follow a consensus-free approach where they take decisions locally.
The Cooperative Artefacts structure clearly exhibits sensing, actuation, fusion, and
inference, although their current application as an alert system for stored hazardous
chemical material containers shows limited actuation, i.e., switching LEDs on and off.
As for fusion, it can be seen in the dedicated perception component that can produce
location and proximity information for instance. A simple Prolog interpreter-like
inference engine is also provided [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. Communication is direct, over a short-range
wireless link.
        </p>
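The query/response pattern with local, consensus-free inference can be sketched as below. This is a hypothetical illustration of the general pattern; the names, the rule, and the chemicals are not taken from the actual Cooperative Artefacts system.

```python
class Artefact:
    def __init__(self, name, chemical):
        self.name = name
        self.facts = {"chemical": chemical, "led_on": False}

    def query(self, key):
        # Query/response sharing of the local knowledge base.
        return self.facts.get(key)

    def infer_and_act(self, neighbours, incompatible):
        # Consensus-free: the decision is taken locally from the artefact's
        # own facts plus the responses gathered from nearby artefacts.
        for other in neighbours:
            pair = frozenset((self.facts["chemical"], other.query("chemical")))
            if pair in incompatible:
                self.facts["led_on"] = True  # limited actuation: switch LED on
        return self.facts["led_on"]

incompatible = {frozenset(("acid", "base"))}
a = Artefact("drum1", "acid")
b = Artefact("drum2", "base")
print(a.infer_and_act([b], incompatible))  # -> True: hazard detected locally
```

Each artefact decides for itself; no negotiation or agreement protocol is needed, which is what distinguishes the consensus-free approach.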
      </sec>
      <sec id="sec-4-4">
        <title>Anonymous collaborative decision making</title>
        <p>
          This scheme allows both a consensus-free and a consensus-based approach through its
coordinated and cooperative solutions, respectively [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Sensing and fusion are exhibited
in the Decentralised Data Fusion (DDF) technique. Furthermore, the nodes/robotic sensors
communicate directly and propagate their information throughout the whole network.
        </p>
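A minimal sketch of decentralised data fusion in information form, assuming scalar Gaussian estimates (a simplification of the general multivariate case): each node holds an information state y = x/var and an information value Y = 1/var, and fusion is additive, so independent contributions can be combined in any order without a central node.

```python
def to_information(mean, var):
    # Convert a Gaussian estimate to information form.
    return mean / var, 1.0 / var     # (information state, information)

def fuse(estimates):
    # Additive fusion of independent information contributions.
    y = sum(e[0] for e in estimates)
    Y = sum(e[1] for e in estimates)
    return y / Y, 1.0 / Y            # back to (mean, variance)

# Two robotic sensors observe the same quantity with different confidence.
e1 = to_information(10.0, 4.0)   # mean 10, variance 4
e2 = to_information(12.0, 1.0)   # mean 12, variance 1
mean, var = fuse([e1, e2])
print(mean, var)  # fused estimate is pulled towards the more certain sensor
```

Because the fusion step is a plain sum, partial results received from any subset of neighbours can be merged incrementally, which is what makes the technique decentralised.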
      </sec>
      <sec id="sec-4-5">
        <title>Millibots</title>
        <p>
          Millibots directly communicate diverse sensory data, depending on the type of sensors
each Millibot is equipped with. They do not negotiate decisions and hence follow
a consensus-free approach. Each Millibot is responsible for local data fusion and
inference, depending on its own view and the information communicated by other team
members. Furthermore, Millibots typically exhibit actuation through mobility and
the ability to command a special type of adjustable sensor, i.e., a servo-motor-based
Directional Infrared Detector Module (DIDM) [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
        </p>
      </sec>
      <sec id="sec-4-6">
        <title>UAV collaborative sensing</title>
        <p>
          The communication scheme among the collaborating UAVs is believed to be direct,
despite the inspiration from swarm intelligence. This is because each UAV has a
view based on its own map of digital pheromones; however, it wirelessly receives pheromone
information from other UAVs in range and alters its map accordingly. Furthermore, a
UAV is a local decision maker that is capable of sensing, e.g., imaging, and of actuating
through manoeuvring. Inference in UAVs can be seen in a fitness evaluation procedure,
i.e., the quality of imaging [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
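The digital-pheromone mechanism described above can be sketched as follows, under illustrative assumptions: each vehicle keeps its own grid of pheromone levels, evaporates them over time, deposits pheromone on interesting cells, and merges levels received directly from vehicles in range. The evaporation rate and the max-merge rule are assumptions for this sketch, not details of the cited system.

```python
EVAPORATION = 0.5  # fraction of pheromone retained per step (assumed value)

class PheromoneMap:
    def __init__(self):
        self.levels = {}   # cell -> pheromone strength

    def deposit(self, cell, amount):
        self.levels[cell] = self.levels.get(cell, 0.0) + amount

    def evaporate(self):
        # Aging: old information fades unless it is reinforced.
        self.levels = {c: v * EVAPORATION for c, v in self.levels.items()}

    def merge(self, received):
        # Direct wireless exchange: keep the max of local and received levels.
        for cell, v in received.items():
            self.levels[cell] = max(self.levels.get(cell, 0.0), v)

    def best_cell(self):
        # Local decision making: manoeuvre towards the strongest cell.
        return max(self.levels, key=self.levels.get)

uav = PheromoneMap()
uav.deposit((0, 0), 1.0)
uav.evaporate()
uav.merge({(2, 3): 4.0})  # pheromone info received from a UAV in range
print(uav.best_cell())    # -> (2, 3)
```

Note that although the metaphor is stigmergic, the pheromone values travel over direct wireless links, which is why the communication means is classified as direct.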
      </sec>
      <sec id="sec-4-7">
        <title>WSAN coordination framework</title>
        <p>
          The means of communication in both the sensor-actor and actor-actor models are
direct. The framework exhibits sensing and actuation within the event-driven clusters.
Furthermore, each sensor performs intermediate local data fusion and takes local
decisions while forwarding event information to the designated actor of the cluster
that has emerged. This actor gathers, processes and reconstructs the event data, after which
a consensus has to be reached among the actors within the action/event area to
select the actor best suited to perform the action. The actor-actor consensus takes
action completion time and/or energy consumption into account [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. It is interesting to
note that some form of delegation is present in the actor-actor coordination model.
        </p>
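The actor-actor consensus step can be sketched as a scored selection over candidate actors, taking completion time and energy consumption into account. The weighted-sum objective below is an assumption made for illustration; the framework itself may combine the two criteria differently.

```python
def select_actor(candidates, w_time=0.5, w_energy=0.5):
    # Each candidate: (actor_id, completion_time, energy_consumption).
    # Lower cost is better; weights trade time against energy.
    def cost(c):
        _, t, e = c
        return w_time * t + w_energy * e
    return min(candidates, key=cost)[0]

actors = [("A1", 4.0, 10.0), ("A2", 6.0, 3.0), ("A3", 5.0, 5.0)]
print(select_actor(actors))                            # balances time and energy
print(select_actor(actors, w_time=1.0, w_energy=0.0))  # time only -> "A1"
```

In the framework, each actor in the action area would evaluate such a score and the actors would agree on the minimiser, which is what makes this step consensus-based rather than purely local.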
      </sec>
      <sec id="sec-4-8">
        <title>CoSense</title>
        <p>Sensors communicate directly, and there is no means of actuation in the described
research. Consequently, indirect communication is not possible, since there is no actuation.
The data fusion process is centralised in a leader node. In addition, each sensor node takes
local routing decisions based on estimates of the cost of sensing, communication and
processing. Inference is implicit in the process of selecting sensors for specific target
surveillance.</p>
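The leader-driven sensor selection can be sketched as a trade-off between expected information utility and the costs of sensing, communication and processing. The utility/cost model below is hypothetical and purely illustrative of this style of selection.

```python
def select_next_sensor(sensors):
    # Each sensor: (sensor_id, utility, sensing_cost, comm_cost, proc_cost).
    # The leader queries the sensor with the best net information gain.
    def net_gain(s):
        _, utility, sense, comm, proc = s
        return utility - (sense + comm + proc)
    return max(sensors, key=net_gain)[0]

sensors = [
    ("s1", 8.0, 1.0, 3.0, 0.5),   # informative but far away (high comm cost)
    ("s2", 6.0, 1.0, 0.5, 0.5),   # slightly less informative, but cheap
    ("s3", 4.0, 0.5, 0.5, 0.5),
]
print(select_next_sensor(sensors))  # -> "s2"
```

The most informative sensor is not necessarily chosen: communication cost can outweigh raw utility, which captures the cost-driven inference the scheme relies on.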
        <p>From the evaluation above, we see that the CCA taxonomy succeeds in classifying
a diverse set of context-aware systems from different domains. We also see that
certain systems provide for consensus-free and consensus-based approaches
simultaneously, while others support only one at a time. Most of the systems exhibit sensing, fusion,
actuation and inference at different levels, since these characteristics are normally intrinsic to
context-aware systems. Communication is typically direct; this is justified by the
possible difficulties in adopting a fully stigmergic communication paradigm. Finally, a form of
delegation is present in one system, namely the WSAN coordination framework; this
could be a good motive for adopting forms of delegation in other context-aware
systems to improve overall system performance.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5 Conclusion</title>
      <p>In this paper, we provided a taxonomy for Collaborative Context Awareness and
investigated a number of context-aware systems and projects that focus on collaboration.
Based on the evaluation of these systems against the CCA taxonomy, we believe that
the taxonomy is sufficiently generic to encompass a diverse range of collaborative
context-aware systems from different domains. We also believe that the CCA taxonomy
is a cornerstone in organising the concept of collaboration in context-aware systems, as it
specifies a goal, two main approaches, i.e., consensus-free and consensus-based, and
a set of concrete means.</p>
      <p>We envisage delegation to be an important aspect in collaborative context-aware
systems that seek optimality, e.g., traffic control, surveillance and UAV systems. In
addition, we believe that means for indirect communication should be provided alongside
the normal direct communication scheme. Finally, we do not encourage philosophical
debates about terminology such as collaboration, cooperation and coordination; rather, we
believe researchers should be more precise in their usage of such terms.</p>
      <p>Our future efforts will focus on designing and implementing a CCA middleware
that will support the development of collaborating intelligent context-aware entities for
scenarios ranging from augmented artefacts to WSANs.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Biegel</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cahill</surname>
          </string-name>
          , V.:
          <article-title>A framework for developing mobile, context-aware applications</article-title>
          .
          <source>In: 2nd IEEE Conference on Pervasive Computing and Communications, Percom</source>
          <year>2004</year>
          , Orlando, Florida (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baldauf</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dustdar</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosenberg</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>A survey on context-aware systems</article-title>
          .
          <source>International Journal of Ad Hoc and Ubiquitous Computing</source>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kotz</surname>
            ,
            <given-names>D.:</given-names>
          </string-name>
          <article-title>A survey of context-aware mobile computing research</article-title>
          .
          <source>Technical Report TR2000-381</source>
          , Dept. of Computer Science, Dartmouth College (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Dey</surname>
            ,
            <given-names>A.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abowd</surname>
          </string-name>
          , G.D.:
          <article-title>Towards a better understanding of context and context-awareness</article-title>
          .
          <source>GVU Technical Report GIT-GVU-99-22</source>
          , Georgia Institute of Technology, Atlanta, GA, USA 30332-0280 (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Pascoe</surname>
          </string-name>
          , J.:
          <article-title>Adding generic contextual capabilities to wearable computers</article-title>
          . In: Second International Symposium on Wearable Computers, Pittsburgh, Pennsylvania, USA (
          <year>1998</year>
          )
          <fpage>92</fpage>
          -
          <lpage>99</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Schilit</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Adams</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Want</surname>
          </string-name>
          , R.:
          <article-title>Context-aware computing applications</article-title>
          .
          <source>In: IEEE Workshop on Mobile Computing Systems and Applications</source>
          , Santa Cruz, CA, US (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. Mäntyjärvi, J.,
          <string-name>
            <surname>Himberg</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huuskonen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Collaborative context recognition for handheld devices</article-title>
          .
          <source>In: PERCOM '03: Proceedings of the First IEEE International Conference on Pervasive Computing and Communications</source>
          , Washington, DC, USA, IEEE Computer Society (
          <year>2003</year>
          )
          <fpage>161</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Siegemund</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>A context-aware communication platform for smart objects</article-title>
          . In: Pervasive Computing: Second International Conference, PERVASIVE
          <year>2004</year>
          .
          Number 3001 in LNCS, Linz/Vienna, Austria, Springer-Verlag (
          <year>2004</year>
          )
          <fpage>69</fpage>
          -
          <lpage>86</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Farinelli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Iocchi</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nardi</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Multirobot systems: a classification focused on coordination</article-title>
          .
          <source>In: IEEE Transactions on Systems, Man and Cybernetics, Part B</source>
          . Volume
          <volume>34</volume>
          . (
          <year>2004</year>
          )
          <fpage>2015</fpage>
          -
          <lpage>2028</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Grocholsky</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kumar</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Durrant-Whyte</surname>
          </string-name>
          , H.:
          <article-title>Anonymous cooperation in robotic sensor networks</article-title>
          .
          <source>In: American Association for Artificial Intelligence</source>
          , AAAI-04 Workshop on Sensor Networks, San Jose, California , USA (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Beigl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gellersen</surname>
          </string-name>
          , H.:
          <article-title>Smart-its: An embedded platform for smart objects</article-title>
          .
          <source>In: Proc. Smart Objects Conference (SOC</source>
          <year>2003</year>
          ), Grenoble, France (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12. : (http://www.smart-its.org/)
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Holmquist</surname>
            ,
            <given-names>L.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mattern</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schiele</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alahuhta</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beigl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gellersen</surname>
            ,
            <given-names>H.W.</given-names>
          </string-name>
          :
          <article-title>Smart-its friends: A technique for users to easily establish connections between smart artefacts</article-title>
          .
          <source>In: Proc. Ubicomp</source>
          <year>2001</year>
          . Number 2201 in LNCS, Springer-Verlag (
          <year>2001</year>
          )
          <fpage>116</fpage>
          -
          <lpage>122</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Gellersen</surname>
            ,
            <given-names>H.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Beigl</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>Multi-sensor context-awareness in mobile devices and smart artifacts</article-title>
          .
          <source>ACM journal of Mobile Networks and Applications (MONET) 7</source>
          (
          <issue>5</issue>
          ) (
          <year>2002</year>
          )
          <fpage>341</fpage>
          -
          <lpage>351</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Strohbach</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gellersen</surname>
            ,
            <given-names>H.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kortuem</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kray</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Cooperative artefacts: Assessing real world situations with embedded technology</article-title>
          .
          <source>In: UbiComp</source>
          <year>2004</year>
          :
          <article-title>Ubiquitous Computing: 6th International Conference</article-title>
          ,Proceedings, Nottingham, UK (
          <year>2004</year>
          )
          <fpage>250</fpage>
          -
          <lpage>267</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Ricci</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Viroli</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Omicini</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Environment-based coordination through coordination artifacts</article-title>
          . In:
          <article-title>Environments for Multi-Agent Systems</article-title>
          , First International Workshop,
          <year>E4MAS 2004</year>
          , New York, NY, USA, Springer (
          <year>2004</year>
          )
          <fpage>190</fpage>
          -
          <lpage>214</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Dorigo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bonabeau</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Theraulaz</surname>
          </string-name>
          , G.:
          <article-title>Ant algorithms and stigmergy</article-title>
          .
          <source>Future Gener. Comput. Syst</source>
          .
          <volume>16</volume>
          (
          <issue>9</issue>
          ) (
          <year>2000</year>
          )
          <fpage>851</fpage>
          -
          <lpage>871</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Omicini</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zambonelli</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>TuCSoN: a coordination model for mobile information agents</article-title>
          . In
          <string-name>
            <surname>Schwartz</surname>
          </string-name>
          , D.G.,
          <string-name>
            <surname>Divitini</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brasethvik</surname>
          </string-name>
          , T., eds.
          <source>: 1st International Workshop on Innovative Internet Information Systems (IIIS'98)</source>
          , Pisa, Italy, IDI - NTNU, Trondheim (Norway) (
          <year>1998</year>
          )
          <fpage>177</fpage>
          -
          <lpage>187</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Navarro-Serment</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grabowski</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paredis</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khosla</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          : Millibots.
          <source>IEEE Robotics and Automation Magazine</source>
          <volume>9</volume>
          (
          <issue>4</issue>
          ) (
          <year>2002</year>
          )
          <fpage>31</fpage>
          -
          <lpage>40</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Parunak</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brueckner</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Odell</surname>
          </string-name>
          , J.:
          <article-title>Swarming coordination of multiple uav's for collaborative sensing</article-title>
          .
          <source>In: 2nd AIAA “Unmanned Unlimited” Systems Technologies and Operations Aerospace Land and Sea Conference</source>
          , Workshop and Exhibition, San Diego, California, USA (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Melodia</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pompili</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gungor</surname>
            ,
            <given-names>V.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Akyildiz</surname>
            ,
            <given-names>I.F.</given-names>
          </string-name>
          :
          <article-title>A distributed coordination framework for wireless sensor and actor networks</article-title>
          .
          <source>In: MobiHoc '05: Proceedings of the 6th ACM international symposium on Mobile ad hoc networking and computing</source>
          , New York, NY, USA, ACM Press (
          <year>2005</year>
          )
          <fpage>99</fpage>
          -
          <lpage>110</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guibas</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , Reich, J.:
          <article-title>Collaborative signal and information processing: An information directed approach</article-title>
          .
          <source>Proceedings of the IEEE 91(8)</source>
          (
          <year>2003</year>
          )
          <fpage>1199</fpage>
          -
          <lpage>1209</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , Reich, J.:
          <article-title>Information-driven dynamic sensor collaboration</article-title>
          .
          <source>IEEE Signal Processing Magazine</source>
          <volume>19</volume>
          (
          <issue>2</issue>
          ) (
          <year>2002</year>
          )
          <fpage>61</fpage>
          -
          <lpage>72</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Randell</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Turing memorial lecture facing up to faults</article-title>
          .
          <source>Computer Journal</source>
          <volume>43</volume>
          (
          <issue>2</issue>
          ) (
          <year>2000</year>
          )
          <fpage>95</fpage>
          -
          <lpage>106</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Sterritt</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bustard</surname>
            ,
            <given-names>D.W.</given-names>
          </string-name>
          :
          <article-title>Autonomic computing - a means of achieving dependability?</article-title>
          <source>In: 10th IEEE International Conference on Engineering of Computer-Based Systems (ECBS</source>
          <year>2003</year>
          ), Huntsville, AL, USA (
          <year>2003</year>
          )
          <fpage>247</fpage>
          -
          <lpage>251</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Sterritt</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bustard</surname>
            ,
            <given-names>D.W.</given-names>
          </string-name>
          :
          <article-title>Towards an autonomic computing environment</article-title>
          .
          <source>In: 14th International Workshop on Database and Expert Systems Applications (DEXA'03)</source>
          , Prague, Czech Republic, IEEE Computer Society (
          <year>2003</year>
          )
          <fpage>699</fpage>
          -
          <lpage>703</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>