<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The NoTube BeanCounter: Aggregating User Data for Television Programme Recommendation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Chris van Aart</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lora Aroyo</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dan Brickley</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vicky Buser</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Libby Miller</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Minno</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michele Mostarda</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Davide Palmisano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yves Raimond</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Guus Schreiber</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ronald Siebes</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Asemantics Srl</institution>
          ,
          <addr-line>Rome</addr-line>
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>BBC - Future Media &amp; Technology</institution>
          ,
          <addr-line>London</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Introduction: Television Meets the Social Web</institution>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>VU University Amsterdam</institution>
          ,
          <country country="NL">the Netherlands</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we present our current experience of aggregating user data from various Social Web applications and outline several key challenges in this area. The work is based on a concrete use case: reusing activity streams to determine a viewer's interests and generating television programme recommendations from these interests. Three system components are used to realise this goal: (1) an intelligent remote control, the iZapper, for capturing viewer activities in a cross-context television environment; (2) a backend, the BeanCounter, for aggregating viewer activities from the iZapper and from different Social Web applications; and (3) a recommendation engine, iTube, for recommending relevant television programmes. The focus of the paper is the BeanCounter as the first step towards applying Social Web data to viewer and context modelling on the Web. This is work in progress of the NoTube project4.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction: Television Meets the Social Web</title>
      <p>is also duplicated across the various social sites. The user is left with multiple user profiles, which are unrelated or only weakly (syntactically) related to each other.</p>
      <p>Maintaining rich profiles of a wide variety of content (in various formats and areas) is critical for providing adequate personalisation services. Recommendation services are a typical example of applying such rich user profiles. However, most approaches today are still limited to simple recommendations ("Take a look at this programme"), similarity-based recommendations ("if you watch this programme, you will probably like this other one in the same category") or collaborative filtering (such as Amazon's "Customers who bought this item also bought..."). Services for television recommendations, such as 4IP's "Test Tube Telly"5, tend to be restricted to using data from one system only. An advanced mechanism for social recommendations will now need to consider a completely different landscape, with different profiles of users, user groups and audience segments. Opening and sharing profiles and attention data between different players in the market creates a different environment - an open social graph where a wider user base can potentially help to improve the quality of recommendations and reduce the costs of moderation and spam filtering.</p>
      <p>In the NoTube project [3] we develop a flexible end-to-end architecture, based on semantic technologies, for personalised creation, distribution and consumption of television content. The project takes a user-centric approach to investigate fundamental aspects of consumers' content-customisation needs, interaction requirements and entertainment wishes, which will shape the future of "television" in all its new forms. In this paper we focus on the first part of this project - combining multiple heterogeneous sources of data, with the addition of semantics, in order to create machine-readable profiles for users. These can later be used to drive better, more focused and more personalised television and other media recommendation services in a personalised, service-based EPG (Electronic Programme Guide) [4]. NoTube aims to overcome a number of limitations in current EPGs. Imagine how to recommend and search for programmes that are: (a) non-fiction, (b) produced between 1989 and 1995, and (c) involve locations in Eastern Europe. In current EPGs (a) is resolvable, while (b) is not, though it becomes possible if the programme information has been properly indexed along a given timeline; (c) also requires appropriate metadata, and a sophisticated knowledge model that allows us to represent and reason about part/whole relations between places, countries and regions (e.g., Sofia is in Bulgaria; Bulgaria is in Eastern Europe). In the project we hope to gain insights into end-user control, user understanding of aggregation of data about them, privacy-preserving architectures, and constraints on reuse of data, as data of this kind is both potentially privacy-invasive and also valuable.</p>
      <p>In this paper we describe the design and implementation aspects of the BeanCounter as the user data collecting component, aiming to illustrate its potential application in the Social Web domain. The main design rationale is to provide a flexible and extensible architecture that exposes robust, scalable and reliable services to handle the different kinds of responses of different social application platforms.</p>
    </sec>
    <sec id="sec-2">
      <title>5 http://testtubetelly.channel4.com/</title>
      <p>A set of APIs allows for modelling the targeted responses in order to gather them, represent them with a set of suitable RDF vocabularies, and integrate them with other pulled information in a fully transparent way. Thus, we outline a loosely coupled architecture based on service-specific adaptors (tubelets) and application servers (modelets), which allow for the selection of the data source and RDF vocabulary and the generation of RDF-ised user data, integrated with data coming from other adaptors. We aim to create a set of language-independent, RESTful APIs that allow for writing adaptors, analysers, data management and enrichment components for different user data services. An important requirement is maintaining a transparent privacy policy that is useful to users with different levels of technical skills.</p>
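The adaptor idea described above can be sketched in a few lines. This is a minimal illustration, not the project's Java implementation: the class name, the sample response shape and the example.org URIs are all assumptions; only the foaf:interest predicate comes from the FOAF vocabulary.

```python
# Minimal sketch of a service-specific adaptor ("tubelet"): it takes a raw
# service response and emits RDF-style (subject, predicate, object) triples.

FOAF_INTEREST = "http://xmlns.com/foaf/0.1/interest"

class TwitterTubelet:
    """Maps a (simplified) Twitter response to activity-stream triples."""

    def __init__(self, user_uri):
        self.user_uri = user_uri

    def pull(self, response):
        # 'response' stands in for the parsed payload the real service returns.
        triples = []
        for tweet in response.get("tweets", []):
            for tag in tweet.get("hashtags", []):
                triples.append((self.user_uri, FOAF_INTEREST,
                                "http://example.org/tag/" + tag))
        return triples

response = {"tweets": [{"text": "loved it", "hashtags": ["torchwood"]}]}
triples = TwitterTubelet("http://example.org/user/bob").pull(response)
```

Each service gets its own such adaptor; because every adaptor emits the same triple shape, the outputs integrate natively in the triple store.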
      <p>In the following sections we report on our work on the BeanCounter: usage scenarios, alignment approaches, design and implementation details of the BeanCounter implementation, and lessons learned. In these lessons we outline some of the challenges related to realising not only the data collection component, but the whole workflow for achieving reuse of user data from the Social Web.</p>
      <sec id="sec-2-1">
        <title>BeanCounter Usage Scenarios</title>
        <sec id="sec-2-1-1">
          <title>Scenarios for end-users</title>
          <p>Bob has accounts at several social websites: Facebook, Twitter, YouTube, and Last.fm. He regularly uses the NoTube `favourites' feature. For each website he has a separate identity and has indicated a basic set of personal information and interests. Each application also carries his (partial) user profile, with his interests related to that application, e.g., music preferences or travel history. There is limited integration of such user profiles, typically not under the user's control and lacking transparency about how data is interpreted in different applications. It is difficult for Bob to find out what each system knows about him and how this knowledge is derived. Moreover, he cannot easily find out his interests and personal statistics based on what he watches, listens to or consumes on different devices (e.g., interactive television, online TV guide, Netflix, YouTube). Using the BeanCounter, Bob can:
- log into the BeanCounter using his OpenID (or create an account);
- link a few of his accounts and devices to the BeanCounter, e.g., his Twitter and YouTube accounts. He could also link an iZapper (a combined EPG, remote, and context-capturing device) to his account. After linking, the service-specific adaptors (`tubelets') pull the data, convert it to an activity stream in RDF using service-specific algorithms, and store it in a triple store;
- view some statistics about himself. Bob can immediately see some interesting information about himself in the BeanCounter web interface (see Figure 1). A machine-readable profile can also be created;
- edit the result, e.g., delete all the data or remove sources. He can decide which part of the profile to make public, if any. The final result is a machine-readable profile that Bob uses to describe himself on the Semantic Web.
Jane is privacy-conscious, technically minded and able to do some programming. She has the latest television-PVR that outputs data using the XMPP protocol; her friends have similar devices. She uses an iZapper to indicate programmes she watches, records, likes and dislikes. For example, Jane can:</p>
          <p>Download and run her own version of the BeanCounter and attach her BeanCounter to her and her friends' PVR accounts, with their permission. In this way, she can make queries over the combined set of interests to see if she can make some recommendations about what to watch, by querying the dataset to see what is most commonly watched and then querying schedules to filter what's on next week.</p>
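Jane's query can be approximated as: count what the group watched most, then keep only what is in next week's schedule. The data shapes below are assumptions for the sketch, not the BeanCounter's actual model (the real system would issue SPARQL queries over the triple store).

```python
from collections import Counter

# Count programmes across all friends' watch logs, rank by popularity,
# then filter the ranking down to programmes scheduled for next week.
def recommend(watch_logs, next_week_schedule, top_n=2):
    counts = Counter(p for log in watch_logs for p in log)
    popular = [p for p, _ in counts.most_common()]
    return [p for p in popular if p in next_week_schedule][:top_n]

logs = [["Torchwood", "Doctor Who"], ["Torchwood", "QI"], ["Torchwood", "Doctor Who"]]
recs = recommend(logs, {"Doctor Who", "QI"})
```

Because the aggregated data lives in one place, the "most watched by my friends" ranking and the schedule filter are a single query rather than a crawl across separate services.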
        </sec>
        <sec id="sec-2-1-2">
          <title>Scenarios for developers</title>
          <p>George is a developer who wants to use the BeanCounter for tracking attention data from YouTube. Suppose an instance of the BeanCounter is currently deployed and running on beancounter.asemantics.com, but is pulling data only from Twitter. George wants to track YouTube favourites as well. He studies the response (an Atom feed) from the YouTube API and writes a couple of Java classes to represent this data. He chooses an appropriate authentication mechanism (e.g., HTTP Basic Auth, or OAuth). Then he needs to select the appropriate RDF vocabulary to represent this user data. FOAF is suitable to represent the static part of the YouTube profile (e.g., the property foaf:interest links users to the tags associated with every YouTube favourite video). All he needs is to add a minimal set of annotations to his Java classes stating this. Once the lines of code are written, he uses the Management Interface at beancounter.asemantics.com/manager and adds the YouTube Tubelet to the BeanCounter. The tubelet is immediately available to be used without restarting the whole service.</p>
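The annotation step George performs can be approximated with a class-level mapping from fields to RDF predicates. This is an illustrative sketch, not the project's Java annotation API: the class name and field names are assumptions; only foaf:interest is taken from FOAF.

```python
# A declarative field-to-predicate mapping plays the role of George's
# Java annotations: generic code turns any annotated bean into triples.

FOAF_INTEREST = "http://xmlns.com/foaf/0.1/interest"

class YouTubeFavourite:
    # Field -> RDF predicate, the analogue of the class annotations.
    rdf_mapping = {"tags": FOAF_INTEREST}

    def __init__(self, user_uri, tags):
        self.user_uri = user_uri
        self.tags = tags

def to_triples(bean):
    triples = []
    for field, predicate in bean.rdf_mapping.items():
        for value in getattr(bean, field):
            triples.append((bean.user_uri, predicate, value))
    return triples

fav = YouTubeFavourite("http://example.org/user/george", ["lego", "trains"])
result = to_triples(fav)
```

The point of the design is that to_triples never changes: adding a new source means writing only the bean classes and their mapping.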
          <p>Consider now Chris, who is a developer at the OpenCalais annotation service6 and wants to enrich identi.ca tweets. The goal is to build an Identi.ca Tubelet that pulls the identi.ca tweets, annotates them using OpenCalais and stores the resulting triples. Similarly to the previous scenario, Chris first needs to make a new tubelet. Then, he writes a small Java class called a Pipe, to indicate that the OpenCalais service is to be used to process the text of the tweet, and finally, he binds the new tubelet to that pipe. Thus, the output of this tubelet will be processed by this pipe and then stored (see Figure 2). The same scenario can be realised with other annotation services, such as the KIM annotation service7.</p>
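The tubelet-to-pipe binding can be sketched as follows. The annotate function is a stand-in for a call to an entity-extraction service such as OpenCalais; its lookup table, the predicate URI and the data shapes are all assumptions for the example.

```python
# Placeholder for an entity-extraction service call: returns the URIs of
# entities it recognises in the text.
def annotate(text):
    known_entities = {"Rome": "http://dbpedia.org/resource/Rome"}
    return [uri for name, uri in known_entities.items() if name in text]

# The "pipe": for each pulled tweet, enrich it with entity annotations
# and store the resulting triples.
def identica_pipe(tweets, store):
    for tweet in tweets:
        for entity_uri in annotate(tweet["text"]):
            store.append((tweet["uri"], "http://example.org/mentions", entity_uri))

store = []
identica_pipe([{"uri": "http://example.org/tweet/1", "text": "Back in Rome!"}], store)
```

Swapping in a different annotation service (e.g., KIM) means replacing only the annotate step; the tubelet and the storage logic are untouched.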
          <p>Another example is Anna, a BBC producer, who wants to find out whether people liked the new episode of `Torchwood'. Suppose the BBC has an instance of the BeanCounter running and aggregates and anonymises data using the public APIs from Twitter and Facebook combined with statistics from iPlayer. Anna writes custom queries for Torchwood and a custom analyser that evaluates whether each activity data item was positive or negative. With the `aggregation'</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>6 http://viewer.opencalais.com</title>
    </sec>
    <sec id="sec-4">
      <title>7 http://www.ontotext.com/kim</title>
      <p>feature - over all user data in the system - she creates custom queries to find out what the overall opinions were and when people tended to watch it.</p>
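A toy version of Anna's analyser might classify each activity item with a word list and then aggregate the verdicts. The word lists and sample texts are invented for illustration; a real analyser would use a proper sentiment model.

```python
# Classify an activity item as positive, negative or neutral by counting
# matches against small illustrative word lists.
POSITIVE = {"loved", "great", "brilliant"}
NEGATIVE = {"hated", "boring", "awful"}

def classify(text):
    words = set(text.lower().split())
    score = len(words.intersection(POSITIVE)) - len(words.intersection(NEGATIVE))
    if score == 0:
        return "neutral"
    return "positive" if score >= 1 else "negative"

items = ["Loved the new Torchwood episode", "Torchwood was boring tonight"]
verdicts = [classify(t) for t in items]
```

Run over the aggregated, anonymised activity stream, such per-item verdicts are what Anna's custom queries then summarise into an overall opinion.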
      <sec id="sec-4-1">
        <title>BeanCounter Metadata and Resources</title>
        <p>More and more data gets published in RDF [8], most notably as linked data [6], [5]: generic knowledge such as DBpedia8, domain data such as instances from the BBC Programmes Ontology9, linguistic data such as W3C WordNet10, service data like instances of WSMO-Lite11, or user data like FOAF12 instances or MySpace metadata13. This knowledge can be accessed in various ways [9]: SPARQL endpoints, Java APIs, and REST-style Web services. Programmers combine these sources to create migration services for other programmers or end-user (mashup) applications. In developing the BeanCounter we have experienced five key aspects relevant to the enrichment of user attention data:
- Selection of resources: The growth of the Semantic data cloud allows us to integrate more external data sources relevant to the NoTube domain. The selection of the right sources is a non-trivial challenge. First, we need to list candidate sources that could possibly contribute to our user scenarios. Second, for those sources we need to estimate the quality in terms of completeness, reasoning complexity, errors and stability. Third, we need to determine the effort needed to align the schemas to connect the source with the other NoTube vocabularies.
- Creation of alignments: The growth of the Semantic data cloud allows more and more interesting alignments for the NoTube domain between the different data sources in the cloud. For example, in one of the case studies we are working on making alignments in SKOS to align the Last.fm music categories with programme data described in the BBC Programmes ontology.
- Creation of connections: The linked data cloud contains commonly used URIs representing entities suitable for reuse, for example DBpedia concepts. Where possible we reuse URIs in this way to create a more connected graph.</p>
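The SKOS alignment step amounts to recording that two category URIs denote (nearly) the same concept. The sketch below is illustrative: the skos:closeMatch predicate is from the SKOS vocabulary, but the example.org category URIs are made up, and a real alignment would be curated rather than a one-line table.

```python
# Record alignments between Last.fm-style categories and BBC-style genres
# as SKOS closeMatch triples, ready to be added to the graph.
SKOS_CLOSE_MATCH = "http://www.w3.org/2004/02/skos/core#closeMatch"

manual_alignments = {
    "http://example.org/lastfm/category/jazz": "http://example.org/bbc/genre/jazz",
}

def alignment_triples(alignments):
    return [(src, SKOS_CLOSE_MATCH, dst) for src, dst in alignments.items()]

triples = alignment_triples(manual_alignments)
```

Once such triples are in the store, a query for a user interested in a Last.fm category can follow the closeMatch link to programmes described with the BBC genre.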
        <p>The challenge is to find and easily reuse the relevant URIs.
- Usage of the alignments and connections: When the alignments are created, we cannot assume that all the data that we want to align will be in one single SPARQL repository. For this we need to develop alignment services that themselves generate RDF on request or have another interface (e.g., REST-style). For example, one type of alignment gets special attention in NoTube because of the non-English content provided in the case studies: multi-lingual aspects related to mapping Korean, Italian, Dutch and German</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>8 http://dbpedia.org/</title>
    </sec>
    <sec id="sec-6">
      <title>9 http://purl.org/ontology/po/</title>
      <p>10 http://www.w3.org/TR/wordnet-rdf/
11 http://www.wsmo.org/ns/wsmo-lite/
12 http://xmlns.com/foaf/spec/
13 http://grasstunes.net/ontology/myspace/myspace.html
content to English (and back). For example, we are working on mapping the Dutch Cornetto `Wordnet' to the W3C WordNet, to obtain more (English) information for topics annotated originally in Dutch.
- Domain dynamics: The world is dynamic, and so is the metadata describing it, such as user profile data, programmes and the like. The challenge is to produce the alignments as quickly as possible and to inform the interested parties. For this, a publish/subscribe mechanism where people can place a `listener' on the topics of interest would make life easier. Current research such as Jqbus (a SPARQL service over the XMPP protocol) is ongoing to tackle this challenge.</p>
      <p>The BeanCounter is a user data aggregation component: it aggregates data about a user from Web sources (e.g., Facebook, Last.fm, Twitter) and produces a machine-readable interest profile for this user. The BeanCounter also produces a human-readable version of this profile, emphasising transparency and privacy-preserving aspects, as well as an adequate and appealing presentation of the information. However, in the context of this paper, we focus on its input to semantic recommendation services. In the prototype version of the overall NoTube architecture (see Figure 3) the BeanCounter is accompanied by a controlling device, the iZapper, that captures a viewer's log in specific contexts (e.g., I am at home, working, in the evening) and a viewer/recommender service, iTube, which uses the input from the BeanCounter and iZapper in order to generate and present the recommended relevant television programmes to the user. iTube is typically a media centre environment, which can make use of the semantic recommendation service's output via an API. It could also be a display on a computer or a smaller device. Currently, we plan to use the iFanzy [2] recommendation service, although the architecture will allow for many different recommendation services to be used.</p>
      <sec id="sec-6-1">
        <title>Prototype Architecture</title>
        <p>iZapper: the controlling device. Traditionally, televisions are controlled with dedicated remote controls. To develop a viewer's profile, the viewer has to configure it explicitly. The iZapper (see Figure 4) is a native iPhone application intended to control a media centre (typical device and channel control), capture a viewer's context, and record the viewer's activities to gain more information. Several types of contextual information can be retrieved from a user's iPhone, such as location (via GPS or wifi endpoint), period (via time) and optionally tasks (via agenda).</p>
        <p>Television activities, such as watching, recording, ranking and bookmarking, will be recorded. All this information will be pushed to the BeanCounter. Every viewer has access to his personal iZapper. The BeanCounter will use the iZapper as one source of user information for context, activities and profile. The context is the circumstances and surroundings of a viewer, composed of location, period and task, and is relevant because viewers' ratings are potentially made to hold in a specific context.</p>
        <sec id="sec-6-1-1">
          <title>The BeanCounter Architecture</title>
          <p>The BeanCounter itself can be modelled as a container of components (called tubelets) responsible for calling and managing a service the user wants to import into the system. Thus, each Web source (e.g., FriendFeed, Twitter, Glue, Last.fm) is wrapped and accessed by a tubelet that is specifically built to handle data from that source. Whatever format is adopted by the Web service response (e.g., Atom, JSON or some sort of XML), a tubelet can parse such content and map it to RDF, allowing native integration. All the logic for tubelet management is embedded in the container, which determines the life cycle of each tubelet.</p>
          <p>The activation of each tubelet involves (1) a Scheduler, scheduling the activation of the tubelet; (2) a User ID Manager, mapping all the different credentials of the user; and (3) a computation of the range of data to be pulled out of the source. The tubelet takes as input the result of the specific source API (e.g., information about the user in that source), parses it and creates Java beans that model the data pulled from that source. The beans are serialised by the tubelet and passed as input to a Pipeline. The pipeline processes the serialised beans and produces an RDF representation of the data. It is essentially a pipe with a number of (optional) steps which eventually produce RDF to be fed into the RDF storage. Apart from the mandatory step for RDF conversion, additional steps can follow to perform ad-hoc actions on this semantic representation. The data flow described is oriented to a specific Web source. Data are pulled directly from the source and some automatic lifting is performed in order to get a semantic representation of the source content (as a set of triples).</p>
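The pipeline described above, a mandatory RDF-conversion step followed by optional ad-hoc steps, can be sketched as follows. The step functions and bean shapes are illustrative assumptions, not the BeanCounter's actual classes.

```python
# Mandatory step: turn serialised beans into RDF-style triples.
def to_rdf(beans):
    return [(b["user"], "http://example.org/watched", b["programme"]) for b in beans]

# An example of an optional ad-hoc step acting on the semantic representation.
def drop_duplicates(triples):
    seen, out = set(), []
    for t in triples:
        if t not in seen:
            seen.add(t)
            out.append(t)
    return out

def run_pipeline(beans, extra_steps=()):
    result = to_rdf(beans)        # mandatory RDF conversion
    for step in extra_steps:      # optional steps, applied in order
        result = step(result)
    return result                 # ready to be fed into the RDF storage

beans = [{"user": "bob", "programme": "Torchwood"},
         {"user": "bob", "programme": "Torchwood"}]
output = run_pipeline(beans, (drop_duplicates,))
```

Because the optional steps all share the triples-in, triples-out shape, new enrichment or statistics steps can be spliced into the chain without touching the conversion step.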
          <p>Another flow is model-driven rather than source-driven. A container is devised in which a modelet is the equivalent of a tubelet. A modelet can accept structured user data in a cross-source fashion. Every time a new piece of information is needed, a new modelet is plugged into the modelets container. A modelet then feeds the proper Java beans, which, just as for the first flow, are serialised and passed to a pipeline. A key aspect of the BeanCounter is illustrated by this last component. It allows for aggregation of data from different Web sources, where the same entity - movie, city, person - within two sources should be unambiguously identified and mapped in both sources. Adding a new modelet to the modelets container is possible by calling specific APIs defining the desired modelet (e.g., how it is structured, which fields) and then hot-plugging it into the container.</p>
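The hot-plugging behaviour can be sketched with a minimal container that registers and removes components at runtime. Names and shapes here are illustrative, not the BeanCounter's actual API.

```python
# A container that manages components (tubelets/modelets) at runtime:
# plugging in a component makes it available immediately, with no restart.
class Container:
    def __init__(self):
        self.components = {}

    def plug(self, name, component):
        self.components[name] = component   # available from this moment on

    def unplug(self, name):
        self.components.pop(name, None)

    def dispatch(self, name, payload):
        return self.components[name](payload)

container = Container()
# Hot-plug a new component into the running container.
container.plug("youtube", lambda data: [("bob", "favourited", v) for v in data])
result = container.dispatch("youtube", ["video1"])
```

The container owns the component life cycle; callers only name the component they want, which is what lets individual adaptors evolve or be replaced independently of the running service.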
        </sec>
      </sec>
      <sec id="sec-6-2">
        <title>Discussion</title>
        <p>During the requirements phase and implementation of two BeanCounters and the iZapper, we have identified several challenges to realising effective reuse of Social Web user data in a domain-specific recommender system (for TV programmes). Some challenges have already been addressed in the development of the BeanCounters. From this work also result some practical guidelines for semantic data integration. We now give a brief overview of those points, starting with lessons learned for semantic data aggregators:
- Pipelines. Ingest data into the system through pipelines that are dynamically editable (e.g., new pipe elements can be added at specific points of the chain). Some pipes could be responsible for collecting statistics on certain triple data, while others could handle the building of specific indexes.
- Container Programming Pattern. This pattern manages the addition and removal of components, as well as their life cycle, inter-communication and activation. It provides an abstraction that hides the complexity of the system, allowing developers to ignore low-level functionalities. Some system components are subject to change depending on the evolution of the services they are written for, so the system should support the addition and replacement of dynamic components without requiring a restart.
- Push, Pull and Trigger. A flexible system should gather data (1) manually, (2) by scheduled activities for getting data from external services, and (3) in trigger mode, with manual activation of the collecting component.
- Hybrid Storage. A triple store is good for persisting semi-structured data, but does not support structured data indexing (e.g., dates or geo coordinates). To get good response times in data retrieval and filtering, a hybrid data storage handling indexes persisted in canonical relational database tables and associated to RDF triples could be useful.
- Avoid working directly with triples. Using an RDF2Object mapping library allows the data handling logic to be expressed programmatically as object manipulation. It also encourages the (re)use of ontologies as libraries.
- Native Support to Track Data Evolution. Data retrieved at two subsequent points in time typically mixes old, already-ingested information with new information. This introduces resource wastage and sometimes also data inconsistencies. Therefore, native support for managing the "delta" of the data, to avoid these kinds of problems, could be useful.</p>
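The "delta" idea from the last lesson can be sketched in a few lines: when a source is polled again, ingest only the triples not already stored. This is an illustrative sketch; a production version would track provenance and timestamps rather than compare raw triples.

```python
# Compute the delta between already-ingested triples and a fresh retrieval,
# so only genuinely new data is processed and stored.
def delta(ingested, retrieved):
    seen = set(ingested)
    return [t for t in retrieved if t not in seen]

old = [("bob", "watched", "Torchwood")]
new = [("bob", "watched", "Torchwood"), ("bob", "watched", "QI")]
fresh = delta(old, new)
```

Ingesting only the delta avoids both the wasted work of re-lifting known data and the inconsistencies that duplicate ingestion can introduce.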
        <p>Challenges for User Data Reuse on the Social Web:
- Presentation of Aggregated User Data</p>
        <p>How can we maintain the user's motivation to add new data and achieve a good confluence of the content and the context? What are non-obtrusive ways of making the user aware of the privacy implications of her actions? How can we make the user aware of the current (1) presentation context, (2) context model related to privacy and (3) adaptation strategies?</p>
        <p>How can we keep the user in control of the profiles, content and context data used?
- Privacy</p>
        <p>How can privacy be handled at the architectural level? How can we achieve user management without asking users, e.g., for service usernames/passwords, or to create a username/password for the BeanCounter? How can we help users understand the privacy issues when reusing and aggregating their data?</p>
        <p>How can we manage data reuse, e.g., by using Creative Commons licensing?
- Context Capturing</p>
        <p>What are useful contexts, e.g., temporal, spatial, task- or device-related? How can we capture the user's context (semi-)automatically, i.e., minimise the explicit user input, while maximising the background collection of user and context data, as well as deductions from this data?
- Data Aggregation and Enrichment</p>
        <p>How can we achieve unified and simple access to dynamic, growing and distributed multimedia content of diverse formats? How can we determine which information from the activity stream is useful for determining the user's interests, and which enrichment is relevant, useful and accurate in different user contexts? What is the minimal amount of data needed to start up the personalisation process, preventing a `cold start'? How can we represent user data statistics, the strength of user interest and the user context in a machine-readable format, e.g., RDF? How can we exploit current standards in television content metadata available both for providers and consumers?</p>
      </sec>
      <sec id="sec-6-3">
        <title>Conclusions</title>
        <p>Our current experience of aggregating user data from various Social Web applications is presented in this paper. The BeanCounter is a user data aggregation component for the NoTube architecture. The usage scenarios motivate the relation between TV recommendation and the Social Web. To aggregate heterogeneous data, we have incorporated alignment support in our design and prototype implementation. Finally, we presented the lessons learned from the prototype implementation regarding the different challenges that need to be tackled and further work to be done.</p>
      </sec>
      <sec id="sec-6-4">
        <title>Acknowledgments</title>
        <p>This work is supported by the EU IP Project NoTube, see http://www.notube.tv.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>L.</given-names>
            <surname>Ardissono</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kobsa</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Maybury</surname>
          </string-name>
          . Personalized Digital Television: Targeting Programs to Individual Viewers (Human-Computer Interaction Series,
          <volume>6</volume>
          ). Kluwer Academic Publishers, Norwell, MA, USA,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>L.</given-names>
            <surname>Aroyo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bellekens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bjorkman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.-J.</given-names>
            <surname>Houben</surname>
          </string-name>
          .
          <article-title>Semantic-based framework for personalised ambient media</article-title>
          .
          <source>Multimedia Tools Appl.</source>
          ,
          <volume>36</volume>
          (
          <issue>1-2</issue>
          ):
          <fpage>71</fpage>
          -
          <lpage>87</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>L.</given-names>
            <surname>Aroyo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kaptein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Palmisano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Conconi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Nixon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Vignaroli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dietze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Nufer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Yankova</surname>
          </string-name>
          .
          <article-title>NoTube: making TV a medium for personalized interaction</article-title>
          .
          <source>In EuroITV 2009 Networked Television</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>P.</given-names>
            <surname>Bellekens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Aroyo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-J.</given-names>
            <surname>Houben</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kaptein</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>van der Sluijs</surname>
          </string-name>
          .
          <article-title>Semantics-based framework for personalized access to TV content: the iFanzy use case</article-title>
          .
          <source>In ISWC/ASWC</source>
          , pages
          <fpage>887</fpage>
          -
          <lpage>894</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Heath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Idehen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Berners-Lee</surname>
          </string-name>
          .
          <article-title>Linked data on the web (LDOW2008)</article-title>
          .
          <source>In WWW</source>
          , pages
          <fpage>1265</fpage>
          -
          <lpage>1266</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Heath</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Berners-Lee</surname>
          </string-name>
          .
          <article-title>Linked data - the story so far</article-title>
          .
          <source>International Journal on Semantic Web and Information Systems (IJSWIS)</source>
          ,
          <source>Special Issue on Linked Data (4)</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>T.</given-names>
            <surname>Coppens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Trappeniers</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Godon</surname>
          </string-name>
          .
          <article-title>AmigoTV: towards a social TV experience</article-title>
          .
          <source>In EuroITV</source>
          , pages
          <fpage>159</fpage>
          -
          <lpage>162</lpage>
          , Brighton, U.K.,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>F.</given-names>
            <surname>Manola</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <article-title>RDF primer</article-title>
          .
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>G.</given-names>
            <surname>Kobilarov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Scott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Raimond</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Oliver</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sizemore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Smethurst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Lee</surname>
          </string-name>
          .
          <article-title>Media meets Semantic Web - how the BBC uses DBpedia and linked data to make connections</article-title>
          .
          <source>In ESWC</source>
          , pages
          <fpage>723</fpage>
          -
          <lpage>737</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>