<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Linked Data Authoring for Non-Experts</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Markus Luczak-Rösch</string-name>
          <email>luczak@inf.fu-berlin.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ralf Heese</string-name>
          <email>heese@inf.fu-berlin.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Freie Universität Berlin, AG Corporate Semantic Web</institution>
          ,
          <addr-line>Takustr. 9, D-14195 Berlin</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Freie Universität Berlin, AG Corporate Semantic Web</institution>
          ,
          <addr-line>Takustr. 9, D-14195 Berlin</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2009</year>
      </pub-date>
      <fpage>20</fpage>
      <lpage>24</lpage>
      <abstract>
<p>The vision of the Semantic Web community is to create a linked "Web of data" providing ubiquitous data access via machine-understandable links between data resources. Because producing and consuming linked data requires considerable expertise, the idea of linked data is taking hold only slowly, and only a few data sources are currently available as linked data. In this paper we present Loomp to facilitate an increasing use of the Web of data. Loomp enables non-experts to produce and publish semantically annotated content as easily as formatting text in a word processor. Furthermore, Loomp simplifies the reuse of content in different Web applications.</p>
      </abstract>
      <kwd-group>
        <kwd>linked data</kwd>
        <kwd>content authoring and publishing</kwd>
        <kwd>Web of data</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. INTRODUCTION</title>
<p>Over the past years, authoring content for the Web underwent an interesting evolution, well known as the transition from the Web 1.0 of the past to the so-called Web 2.0 of the present. We argue that the decisive catalyst for this evolutionary step was the broad success of blogging. Easy-to-use tools such as WordPress enabled a critical mass of people to change their behavior on the Web from pure consumption to an interplay of consuming and publishing. In addition, people apply wiki systems as appropriate means for managing large knowledge bases collaboratively. Both developments show that ordinary Web users nowadays have a higher technical affinity and a better understanding of network effects on the Web than in the times of Web 1.0. Authoring content online has become a familiar task.</p>
<p>Likewise, we believe that the principles of linked data will once more change the perception of the Web, and publishing content as linked data will become commonplace. At the moment, most linked data sources offer data that originally resides in relational databases; wrappers convert this data into RDF automatically. Besides the availability of database wrappers, we think that the broad success of linked data in everyday usage also depends on the availability of authoring tools that enable ordinary Web users, i.e., non-experts with regard to Semantic Web technologies, to publish linked data. Currently, tool support exists for editing metadata or adding semantics to wikis, but to the best of our knowledge there are no tools that allow non-expert users to enrich content such as text and multimedia objects with detailed semantics.</p>
<p>In this paper we introduce Loomp, a system that allows every Web user to create semantically enriched content as easily as formatting text in a word processor. A user can easily publish the same content to various applications such as Web browsers, blogs, and wikis. Additionally, Loomp features an integrated linked data server for publishing the content as linked data (Figure 1).</p>
<p>The paper is structured as follows: As an illustrative example of using Loomp in a real-world scenario, we report the results of interviewing journalists and editors in Section 2. In Section 3 we give an overview of the Loomp architecture and describe its key points in detail. In Section 4 we present related work, and in Section 5 we conclude and outline our future work.</p>
    </sec>
    <sec id="sec-2">
<title>2. THE JOURNALISTS USE CASE</title>
<p>Nowadays we can find interesting use cases in the literature with underlying business models that are based on linked data contained in webpages. The BBC, for example, developed a system that utilizes automatic enrichment of content to increase the visiting time on their website (http://www.bbc.co.uk/blogs/radiolabs/). By providing links to related information sources on its own website, users do not need to search external sources for further information. In general, it is more complex to identify the business value of publishing linked data on the Web; in particular, the advantage over search engines does not hold as an argument. For example, we have to distinguish between the automatic generation of linked data from conventional data sources and the manual enrichment of Web content. In the first case, the creation and publishing of linked data does not cause any additional effort for the author. In the latter case, an author has to put more effort into the creation of Web content, because he has to face the additional task of annotation.</p>
<p>In this section we describe a use case in the domain of journalism which illustrates the added value of manually created linked data. This journalists use case is representative of content- and knowledge-intensive work in a heterogeneous environment. We personally interviewed journalists and editors of publishing houses who typically work in the areas of print publishing, online publishing, and cross-media publishing. The self-conceptions of the two groups overlap somewhat, because freelancers writing articles sometimes call themselves editors. In this paper, however, we distinguish the two groups on the basis of their tasks: journalists research and write articles, and editors revise and publish the work of journalists. Journalists may be employed or work as freelancers. As a matter of fact, however, many freelance journalists have a contract with only a single publishing house.</p>
<p>[Figure 2: Exchange of research results, articles, and released works between freelance and employed journalists, the editor, and the publishing house.]</p>
      <p>Journalists typically deliver their work as text documents and via email communication. In some cases, especially in the online publishing sector, journalists enter their articles directly into editorial management systems or content management systems. Furthermore, they add appropriate categories and tags. Finally, an editor revises and releases the articles for his department.</p>
<p>
        During the last years, several studies such as [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] have recognized an increasing importance of online publishing and called for more professional journalists in online media. Classical publishing houses and media companies are the most important information providers on the Web. However, even these providers need to satisfy consumer demand for cross-media publishing of content.
      </p>
<p>In Figure 3 we give an overview of Loomp in the journalist use case setting. Loomp can be applied in two ways: first, as a personal information management system for journalists, to facilitate easier search and reuse of former research results and articles; second, as an editorial management system for publishing houses, which acts as a flexible means for cross-media publishing and personalized content aggregation.</p>
<p>[Figure 3: Overview of Loomp in the journalist use case, connecting the research results and article archives of freelance and employed journalists with the editor, the publishing house, and Web data.]</p>
<p>Loomp helps journalists to manage their notes, interview logs, references, addresses, etc. The system supports users in enriching them semantically, e.g., by an automatic annotation assistant and an easy-to-use editor for manual annotations. Furthermore, authors can write and manage their articles with Loomp, creating semantic annotations along the way. In particular, Loomp helps to link an article to its information sources. Loomp provides human- and machine-readable representations of the content, so that it can easily be searched, reused, and published. In this case Loomp serves as a Web authoring tool.</p>
<p>On the side of the publishing houses, editors use Loomp to revise and edit articles written by journalists. Furthermore, they can add more annotations to articles and possibly interlink them. Finally, they choose a publishing channel for the work (e.g., a blog, an RSS feed, a wiki, or print) and release it. In this role we regard Loomp as a distributed and collaborative Web content management system which facilitates cross-media publishing of semantically enriched information.</p>
<p>The benefits of Loomp in this context are manifold. Most importantly, Loomp features a semantic search engine and thus decreases the effort of finding information that a user has created before. Because the system also keeps track of provenance information, authors can retrace the sources of an article. Since Loomp serves all content semantically enriched, consumers can modify its presentation according to their current information needs. For example, a reader may decide to highlight all names of persons in a text. As a consequence, authors and editors are more or less freed from the effort of formatting texts in bold, italics, or underline. Based on the semantic annotations, the content provider can offer content- and target-group-specific services, e.g., the BBC example of providing related information or accurately fitting commercials.</p>
    </sec>
<sec id="sec-3">
      <title>3. DESIGNING A LINKED DATA EDITOR FOR THE MASSES</title>
<p>Considering our use case example, it becomes clear that building a linked data authoring tool for a broad range of Web users is a complex task. The design criteria have to respect that the target group has no theoretical understanding of what RDF and linked data are. Moreover, we expect that the common understanding of the Web as a network of human-readable pages will not change in the near future, so that the value of a Web of data is not very familiar to ordinary users. To lower the barriers, the compelling simplicity of Web 2.0 applications should be transferred to the task of creating linked data: light-weight, easy to use, and easy to understand. In the following list we describe design requirements for an authoring system which in our opinion are necessary to enable non-expert users to participate in the Web of data.</p>
<p>Intuitive user interface The system hides the complexity of creating linked data by providing an intuitive user interface. It follows common mental models and uses well-known procedures of system interaction to produce semantic annotations. For example, every computer user is nowadays able to select a text and to click a button to format it in italics.</p>
      <p>Simple vocabularies Although Web users know the term
URL or Internet address, they are currently rarely
aware of namespaces. Thus, the system provides
access to vocabularies without going into technical
details. Each concept of a vocabulary has a meaningful
label and is explained in simple terms and with
examples of usage. The system supports widely accepted
vocabularies and is able to map concepts of equal or
similar meaning.</p>
<p>Reuse of content Often the same content is published in different formats, so the system has to be able to convert the content to common formats such as PDF and to interact with other (Web) applications such as blogs and wikis.</p>
<p>Support for linked data The system offers its content as linked data. In order to create linked data, the system has to provide support for searching resources and linking to them.</p>
      <p>Data authority A user decides which data is publicly
available.</p>
<p>Easy to install The requirements for installing and running the system are low, so that it can be installed in most Web hosting environments. The need for configuration is reduced to a minimum.</p>
<p>Some of these requirements may seem rather visionary, but we develop Loomp with them in mind. We focus on technical requirements as well as socioeconomic requirements. Both, but mostly the latter, stress the goal of designing an authoring tool for the Web of data which does not contradict human mindsets.</p>
    </sec>
    <sec id="sec-5">
<title>3.1 The Loomp System Architecture</title>
<p>The two basic types of resources managed by Loomp are fragments and mash-ups. A fragment is the smallest piece of information in Loomp and describes a closed notional entity containing annotated text, multimedia content, or a SPARQL query. Mash-ups are composed of an arbitrary number of fragments. Both fragments and mash-ups are assigned a unique identifier (URI) and can be retrieved by dereferencing this URI. In the case of SPARQL queries we assign two identifiers, one for the query itself and one for its result. As known from the RDF specification, the identifiers can be used to make statements about these resources, e.g., to add metadata such as the author and the creation date.</p>
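The fragment/mash-up model with dereferenceable URIs and metadata statements can be sketched as follows. This is a minimal illustration, not Loomp's actual implementation: the base URI, the `loomp:hasFragment` predicate, and the Dublin Core terms are assumptions.

```python
from dataclasses import dataclass, field
from datetime import date
from itertools import count
from typing import List, Tuple

# Hypothetical base URI and counter for minting identifiers;
# the paper does not specify Loomp's actual URI scheme.
BASE = "http://example.org/loomp/"
_ids = count(1)

Triple = Tuple[str, str, str]  # (subject, predicate, object) as plain strings

@dataclass
class Fragment:
    """Smallest unit of content: an annotated text with its own URI."""
    text: str
    author: str
    created: str = field(default_factory=lambda: date.today().isoformat())
    uri: str = field(default_factory=lambda: BASE + "fragment/" + str(next(_ids)))

    def metadata(self) -> List[Triple]:
        # Because the fragment has a URI, we can make RDF statements
        # about it, e.g. its author and creation date.
        return [
            (self.uri, "dc:creator", self.author),
            (self.uri, "dc:created", self.created),
        ]

@dataclass
class MashUp:
    """An ordered composition of fragments; it also gets its own URI."""
    fragments: List[Fragment]
    uri: str = field(default_factory=lambda: BASE + "mashup/" + str(next(_ids)))

    def metadata(self) -> List[Triple]:
        # One statement per contained fragment; the same fragment
        # may legitimately occur more than once.
        return [(self.uri, "loomp:hasFragment", f.uri) for f in self.fragments]

f1 = Fragment("Interview notes on cross-media publishing", author="Ralf")
m = MashUp([f1])
print(m.metadata())
```

Dereferencing the minted URI would then return either the content itself or these metadata statements, depending on the client.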
<p>
        We designed Loomp as a typical LAMP-compatible (Linux, Apache, MySQL, PHP) Web application (see Figure 4). Loomp serves contents either in RDF (e.g., for linked data clients) or in XHTML/RDFa [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] (e.g., for Web browsers).
      </p>
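Serving either RDF or XHTML/RDFa for the same URI is typically driven by HTTP content negotiation. A minimal sketch, assuming a simplified Accept-header check rather than Loomp's actual dispatch logic:

```python
def choose_representation(accept_header: str) -> str:
    """Pick the serialization to serve based on the HTTP Accept header."""
    accept = accept_header.lower()
    # Linked data clients typically ask for an RDF media type.
    if "application/rdf+xml" in accept or "text/turtle" in accept:
        return "rdf"
    # Browsers fall back to XHTML with embedded RDFa annotations.
    return "xhtml+rdfa"

print(choose_representation("application/rdf+xml"))  # rdf
print(choose_representation("text/html"))            # xhtml+rdfa
```

A production server would additionally honor quality (`q=`) values and redirect between resource and representation URIs, as is common practice for linked data servers.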
      <p>
        On the server side the main components are a database for
storing the data, a linked data server for providing access to
the data as linked data, an RDF API for accessing the data
by the Loomp application, and a security/authorization
component for granting access to the data. The linked data
server and RDF API components are realized with the RAP
Pubby library [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
<p>On the client side we distinguish between a frontend and a backend. The term frontend comprises all clients that retrieve data from the Loomp application without authorization, e.g., access to all publicly available content by linked data clients and by Web browsers. The boxes Faceted Browsing and Faceted Viewing represent websites that exploit the semantic annotations of the content for navigating to related content and for changing the appearance of the content (see Section 3.3). Loomp also features a plug-in mechanism to allow (read and write) access to its content from existing Web applications. Thus, for example, it is possible to view the annotated content as simple HTML pages, blog entries, or wiki pages. The term backend comprises clients that have to authorize themselves before they can access data; typically, these clients are allowed to modify content, e.g., via the One Click Annotator and the content management component (see Section 3.2). Using the Vocabulary Mgmt. component an experienced user may add vocabularies and modify them.</p>
    </sec>
    <sec id="sec-6">
<title>3.2 One Click Annotation and Content Management</title>
<p>To enable users to annotate fragments as easily as they format text in word processors, we developed the One Click Annotator (see Figure 5). The One Click Annotator extends the TinyMCE online HTML editor (http://tinymce.moxiecode.com/) to support RDFa annotations in a WYSIWYG way. It adopts the look and feel that is well known from applying styles to text in word processors. On the left side of the annotation toolbar a user selects the concept for annotating a piece of text; on the right side she chooses a vocabulary from a drop-down menu. The effort of annotating text semantically replaces the effort of formatting text bold or italic. For example, the user selects an email address in the text and clicks the button Email. In a next step we plan to integrate an automatic annotation recommender; nevertheless, the user retains the authority to reject suggested annotations. In the background the One Click Annotator inserts RDFa annotations into the XHTML. If a user saves her changes, the XHTML/RDFa content is sent to the server, which in turn extracts RDF statements and stores them in a triple store. In detail, a fragment is stored as an XHTML/RDFa representation as well as a batch of extracted and assigned RDF metadata.</p>
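To illustrate, an annotation produced this way might look like the following XHTML/RDFa snippet. The FOAF vocabulary and the exact attribute layout are assumptions for illustration; the paper does not prescribe which vocabularies the toolbar offers.

```html
<!-- A sentence after the user selected the name and clicked "Name",
     then selected the address and clicked "Email". The server later
     extracts RDF statements from these property attributes. -->
<p xmlns:foaf="http://xmlns.com/foaf/0.1/">
  Please contact
  <span property="foaf:name">Ralf Heese</span>
  at
  <span property="foaf:mbox">heese@inf.fu-berlin.de</span>
  for further details.
</p>
```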
      <p>A mash-up consists of a sequence of fragments. The user
interface for modifying mash-ups exploits modern web
technologies to allow drag-and-drop and in-place editing of its
fragments. A user can extend a mash-up by creating a new
fragment at the desired place of the mash-up or searching
and dragging an existing fragment to it.</p>
<p>Loomp makes use of the semantic annotations for searching fragments and mash-ups. For example, if a search term has been annotated with different concepts, then the result items are grouped and displayed according to these concepts. In a second step, a user can refine the search by retrieving the remaining fragments of a group. A fragment can be contained in many different mash-ups; it is even possible that a mash-up contains the same fragment more than once. To comply with the linked data principles, a user can link fragments and mash-ups to other resources on the Web, e.g., Loomp references resources in DBpedia to provide unique identifiers.</p>
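The concept-based grouping of search results can be sketched over a tiny in-memory stand-in for the triple store. The fragment URIs and vocabulary terms below are made up for illustration; Loomp's actual store is an RDF triple store queried on the server.

```python
from collections import defaultdict

# Toy triple store: (subject, predicate, object) tuples.
triples = [
    ("frag/1", "rdf:type", "loomp:Fragment"),
    ("frag/1", "dc:subject", "foaf:Person"),
    ("frag/2", "rdf:type", "loomp:Fragment"),
    ("frag/2", "dc:subject", "foaf:Organization"),
    ("frag/3", "dc:subject", "foaf:Person"),
]

def group_by_concept(triples, predicate="dc:subject"):
    """Group fragments by the concept they were annotated with,
    mirroring how result items are grouped in Loomp's search."""
    groups = defaultdict(list)
    for s, p, o in triples:
        if p == predicate:
            groups[o].append(s)
    return dict(groups)

print(group_by_concept(triples))
# Fragments annotated with the same concept form one result group,
# which the user can then expand to refine the search.
```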
    </sec>
    <sec id="sec-7">
<title>3.3 Consumer-oriented Presentation</title>
<p>Nowadays, if an author decides to emphasize a text phrase that seems important to her, she may format it using an italic font or a different font color. A consumer of the content is unable to change the appearance in order to facilitate a specific task; e.g., a consumer might like to highlight phrases (such as all names of persons belonging to a working group) that are important for deciding on the relevance of a webpage when searching on a topic.</p>
<p>Loomp aims at supporting consumer-oriented presentation of content. Typically, the content managed by Loomp is delivered in XHTML/RDFa format and thus contains semantic annotations. In Loomp, the appearance of the content is defined by cascading style sheets (CSS). By separating content from appearance, an author who uses Loomp can still influence the appearance of the content by changing an existing style sheet or providing a user-specific one.</p>
<p>In contrast to current Web pages, Loomp also allows consumers to change the appearance of the content according to their current needs. By means of a toolbar a consumer can format semantically annotated phrases, e.g., she can highlight the names of members of a specific working group with a yellow background. We call this feature faceted viewing.</p>
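Since the annotations are delivered as RDFa attributes in the XHTML, faceted viewing can be realized with ordinary CSS rules. The selector below is a sketch assuming a FOAF-style property attribute, not Loomp's actual style sheet:

```css
/* Highlight every phrase annotated as a person's name with a yellow
   background; a faceted-viewing toolbar would switch such rules
   on and off per concept. */
span[property="foaf:name"] {
  background-color: yellow;
}
```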
    </sec>
    <sec id="sec-8">
<title>4. RELATED WORK</title>
<p>
        In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] Heath et al. divided the creation of linked data into the following steps: i) select vocabularies, ii) partition the RDF graph into "data pages", iii) assign a URI to each data page, iv) create HTML variants of each data page, v) assign a URI to each entity, vi) add page metadata and more links, and vii) add a semantic sitemap. With Loomp we follow these steps for creating linked data. Using the One Click Annotator a user selects from a set of vocabularies that reuse existing ontologies (i). In Loomp we distinguish between fragments and mash-ups, which are automatically assigned URIs in the background. The content is published in HTML format among other formats (ii-v). The user may also add metadata to fragments and mash-ups (vi). Last but not least, the Loomp server generates a sitemap of all publicly available content (vii).
      </p>
<p>
        Tools for creating semantically enriched content include semantic wikis and semantic tagging engines. Examples of semantic wikis are OntoWiki [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], Ikewiki [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], and Semantic MediaWiki [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. These wikis extend traditional wikis by functionalities that enable users to add annotations to a wiki page and to specify relationships between pages based on ontologies. In our opinion semantic wikis are far from being usable by non-experts. Besides the effort to learn a special syntax to write and to annotate content, a user has to cope with technical terms such as resource, different kinds of relationships, and namespaces. In contrast, semantic tagging engines such as faviki (http://www.faviki.com) exploit the well-known user interaction procedure of tagging to annotate content. In the background faviki calls functions of the Zemanta Semantic API (http://www.zemanta.com/) to retrieve suggestions for tags, e.g., Wikipedia terms.
      </p>
<sec id="sec-8-1">
<p>Zemanta as well as OpenCalais (http://www.opencalais.com/) are examples of services that automatically annotate content. In Loomp we use these services to suggest annotations to users.</p>
<p>
          In [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], the authors present a JavaScript API for modifying RDFa directly on the client side and synchronizing the changes with the server. While our One Click Annotator is suitable for extensive changes to the annotations of a text, this JavaScript library is a useful supplement for smaller changes to annotated texts.
        </p>
<p>
          The Tabulator linked data browser [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] allows users to edit data directly on the Web of data. However, since it requires a Firefox plug-in in its current stage of development, we regard it as a proprietary tool. OpenLink Data Spaces (http://virtuoso.openlinksw.com/wiki/main/Main/Ods) provides a complete platform for creating a presence on the Web of data, e.g., a calendar, weblog, or bookmark manager. However, it focuses on describing data entities semantically, while we enrich the content itself.
        </p>
<p>
          In addition, many wrappers have been developed for websites and relational databases to generate linked data. For example, in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] Bizer et al. describe one of many examples of writing an API wrapper to mash up and interlink data as RDF. Auer et al. present an approach to create linked data by crawling and processing data from Web pages in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. As a more direct way to publish linked data, several tools support the mapping of relational databases to RDF, e.g., D2RQ [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], RDB2RDF [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], and Triplify [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. All these approaches have in common that they only support an indirect way of creating linked data, i.e., an author cannot directly annotate the content.
        </p>
<p>
          On his website [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] the author presents the idea of writing RDF data directly back to non-RDF data sources. With Loomp we pursue a similar goal, but from our viewpoint the data should reside on the user's server rather than on the application server, if the user wishes so. Using the Loomp plug-in an external application can directly retrieve the user data from her server.
        </p>
      </sec>
    </sec>
    <sec id="sec-9">
<title>5. CONCLUSION AND OUTLOOK</title>
<p>In this paper we presented Loomp, a Web application for creating, managing, and publishing semantic data. With Loomp we aim to make an important contribution to the success of linked data. In contrast to existing editors, our main focus lies on an intuitive user interface that enables every Web user to produce semantically enriched content and to distribute it across various media easily. Furthermore, we reduced the system requirements for operating a Loomp server to a minimum (a LAMP system) and integrated a linked data server which provides all public content as linked data to increase the awareness and usage of linked data.</p>
      <p>An initial version of Loomp has recently been released
which illustrates the basic functionalities, e.g., content
management and publishing. As a major feature, we will exploit
existing web services to propose annotations automatically
which can be accepted or rejected by a user. In our future
work, we will also address the integration and the support
of third party applications such as blogs, wikis, and word
processors.</p>
    </sec>
    <sec id="sec-10">
      <title>Acknowledgments</title>
<p>This work has been partially supported by the "InnoProfile-Corporate Semantic Web" project funded by the German Federal Ministry of Education and Research (BMBF).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Adida</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Birbeck</surname>
          </string-name>
          .
          <article-title>RDFa primer { bridging the human and data webs</article-title>
          . W3C Working Group Note, Oct.
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Auer</surname>
          </string-name>
          . Triplify. Project Website,
          <source>Retrieved January 9</source>
          ,
          <year>2009</year>
          , from http://triplify.org.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Auer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          , G. Kobilarov,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cyganiak</surname>
          </string-name>
          , and
<string-name>
            <given-names>Z.</given-names>
            <surname>Ives</surname>
          </string-name>
          .
          <article-title>DBpedia: A nucleus for a web of open data</article-title>
          .
          <source>In Proceedings of ISWC/ASWC</source>
          <year>2007</year>
          , volume
          <volume>4825</volume>
          <source>of LNCS</source>
          , pages
          <volume>715</volume>
          {
          <fpage>728</fpage>
          . Springer Verlag, Nov.
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Auer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dietzold</surname>
          </string-name>
          , and
<string-name>
            <given-names>T.</given-names>
            <surname>Riechert</surname>
          </string-name>
          .
          <article-title>OntoWiki - A Tool for Social, Semantic Collaboration</article-title>
          . In I. F. Cruz et al., editors,
          <source>The Semantic Web - ISWC</source>
          <year>2006</year>
          , volume
          <volume>4273</volume>
          <source>of LNCS</source>
          , pages
          <volume>736</volume>
          {
          <fpage>749</fpage>
          . Springer,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Berners-Lee</surname>
          </string-name>
          et al.
          <article-title>Tabulator redux: Writing into the semantic web</article-title>
          .
          <source>Technical report</source>
          , ECS, University of Southampton,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cyganiak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Gauss</surname>
          </string-name>
          .
          <article-title>The RDF Book Mashup: From Web APIs to a Web of Data</article-title>
          . In S. Auer,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Heath</surname>
          </string-name>
          , and G. A. Grimnes, editors,
          <source>SFSW</source>
          , volume
          <volume>248</volume>
          <source>of CEUR Workshop Proceedings. CEUR-WS.org</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          and
<string-name>
            <given-names>A.</given-names>
            <surname>Seaborne</surname>
          </string-name>
          .
          <article-title>D2RQ - treating non-RDF databases as virtual RDF graphs</article-title>
          .
          <source>In ISWC2004 (posters)</source>
          ,
          <year>November 2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Dietzold</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hellmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Peklo</surname>
          </string-name>
          .
          <article-title>Using JavaScript RDFa widgets for model/view separation inside read/write websites</article-title>
          .
          <source>In Proceedings of the 4th Workshop on Scripting for the Semantic Web</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Glotz</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Meyer-Lucht</surname>
          </string-name>
          .
          <article-title>Zeitung und Zeitschrift in der digitalen Ökonomie - Delphi-Studie</article-title>
          .
          <source>Project website</source>
          , retrieved January 10,
          <year>2009</year>
          , from http://www.unisg.ch/org/mcm/web.nsf/wwwPubInhalteGer/Online Publishing Delphi-Studie.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hausenblas</surname>
          </string-name>
          .
          <article-title>pushback - Write Data Back From RDF to Non-RDF Sources</article-title>
          . http://esw.w3.org/topic/PushBackDataToLegacySources, March
          <year>2009</year>
          . Retrieved on 3rd March,
          <year>2009</year>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.</given-names>
            <surname>Heath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hausenblas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cyganiak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Hartig</surname>
          </string-name>
          .
          <article-title>How to publish linked data on the web</article-title>
          ,
          <year>October 2008</year>
          .
          Tutorial at ISWC 2008, retrieved on 10th February,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Krötzsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vrandečić</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Völkel</surname>
          </string-name>
          .
          <source>The Semantic Web - ISWC</source>
          <year>2006</year>
          , chapter Semantic MediaWiki, pages
          <fpage>935</fpage>
          -
          <lpage>942</lpage>
          . Lecture Notes in Computer Science. Springer Verlag,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Malhotra</surname>
          </string-name>
          .
          <article-title>Progress report from the RDB2RDF XG</article-title>
          . In C. Bizer and A. Joshi, editors,
          <source>International Semantic Web Conference (Posters &amp; Demos)</source>
          , volume
          <volume>401</volume>
          <source>of CEUR Workshop Proceedings. CEUR-WS.org</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R.</given-names>
            <surname>Oldakowski</surname>
          </string-name>
          et al.
          <article-title>RAP: RDF API for PHP</article-title>
          .
          <source>In Proceedings of SFSW</source>
          <year>2005</year>
          , May
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Schaffert</surname>
          </string-name>
          .
          <article-title>IkeWiki: A semantic wiki for collaborative knowledge management</article-title>
          .
          <source>In Proceedings of STICA'06</source>
          , Manchester, UK
          ,
          <year>June 2006</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>