<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Converging Web and Desktop Data with Konduit</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Laura Dragan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Knud Moller</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Siegfried Handschuh</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oszkar Ambrus</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sebastian Trug</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Digital Enterprise Research Institute, National University of Ireland</institution>
          ,
          <addr-line>Galway</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Mandriva S.A.</institution>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we present Konduit, a desktop-based platform for visual scripting with RDF data. Building on the idea of the semantic desktop, non-technical users can create, manipulate and mash up RDF data with Konduit, and thus generate simple applications or workflows that are aimed at simplifying their everyday work by automating repetitive tasks. The platform allows users to combine data from both the web and the desktop and to integrate it with existing desktop functionality, thus bringing us closer to a convergence of Web and desktop.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>With the Semantic Web gaining momentum, more and more structured data
becomes available online. The majority of applications that use this data today
are concerned with aspects like search and browsing. However, a greater benefit of
structured data is its potential for reuse: being able to integrate existing web data
in a workflow relieves users from the investment of creating this data themselves.
On the other hand, when it comes to working with data, users still rely on
desktop-based applications, which are embedded in a familiar environment.
Web-based applications either simply do not exist, or have shortcomings in terms of
usability. They can only access web data, and do not integrate with data that
users might already have on their own desktop, let alone with other applications.
Even considering that it may be beneficial for users to publish some desktop data
online, releasing all their data on the web may raise significant privacy issues.
Instead, what is needed is a way of accessing structured web data from the
desktop, integrating it with existing desktop data and applications, and working with
both in a unified way.</p>
      <p>The Semantic Desktop, through projects such as Nepomuk, now opens up new
possibilities of solving this problem of integrating data and functionality from
both web and desktop. On the Semantic Desktop, data is lifted from
application-specific formats to a universal format (RDF) in such a way that it can be
interlinked across application boundaries. This allows new ways of organizing data,
but also new views on and uses of arbitrary desktop data. What is more,
because desktop data is now available in a web format, it can also be interlinked and
processed together with genuine web data. While the unified data model makes
this scenario easier than it previously was, implementing it would ordinarily
still require an experienced developer, who would use a full-fledged programming
language to create applications that manipulate and visualize RDF data. With
current tools, casual or naive users would not be able to perform such tasks.</p>
      <p>In this paper, we present an approach for mashing up RDF data, which can
originate from either the web or, through the Semantic Desktop, from arbitrary
desktop applications. While the individual components that make up our
approach are not new in themselves, we believe that their combination is new and
opens up possibilities that have not been available before.</p>
    </sec>
    <sec id="sec-2">
      <title>Background</title>
      <p>Our work is based on and influenced by several existing technologies, such as
the Semantic Desktop, Unix pipes, scripting languages, visual programming and
scripting, and dataflow programming. In the following we will describe these
technologies.</p>
      <p>
        Our approach assumes that we are working on a semantic desktop [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
rather than a conventional one. As discussed earlier, this means that data in
application-specific formats has been lifted to a uniform data format, such as
RDF or, in the case of the Nepomuk project [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], to an extension such as NRL (http://www.semanticdesktop.org/ontologies/nrl/)
(in the remainder of this paper, we mean a semantic representation language
when we say RDF). Representing desktop data in a uniform format means that
it can be interlinked and processed in a uniform way across the desktop, but also
that it can be interlinked and processed with web data in the same way.
      </p>
      <p>For our application, the Semantic Desktop implementation of choice is
Nepomuk-KDE (http://nepomuk.kde.org), developed during the Nepomuk project as part of the
K Desktop Environment. However, more mainstream products, such as the Spotlight
technology of Mac OS X, are also a step towards a unified view of all desktop data.</p>
      <p>The concept of pipes has been a central part of UNIX and its derivatives
since 1973, when it was introduced by M. Doug McIlroy. The basic idea of
pipes is that individual processes or programs can be chained into a sequence
by connecting them through the operating system's standard streams, so that
the stdout of one process feeds into its successor's stdin. In this way, tasks which
require functionality from different applications or data from different sources
can elegantly be combined into a single workflow.</p>
      <p>
        Scripting languages such as Perl and the Unix shell allow rapid application
development and a higher level of programming. They represent a very different
style of programming compared to system programming languages like C or
Java, mainly because they are designed for "gluing" applications [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The libraries
provided by most scripting languages are highly extensible, with new components
being added as the need for them arises. Being weakly typed is another defining
characteristic of scripting languages that Konduit employs.
      </p>
      <p>
        As a form of end-user programming, visual programming (VP) is
targeted at non-experts who want to be able to automate simple processes and
repetitive tasks, without having to learn the complexities of a full-fledged
programming language. In visual programming, users construct the program not by
writing source code, but instead by arranging and linking visual representations
of components such as data sources, filters, loops, etc. In other words, "a visual
programming system is a computer system whose execution can be specified
without scripting" [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], where "scripting" is meant in the traditional sense of writing lines
of source code.
      </p>
      <p>Recently, VP has gained some popularity in the form of Yahoo Pipes
(http://pipes.yahoo.com/). In allusion to UNIX pipes, Yahoo Pipes allows the user to visually compose workflows
(or pipes) from various ready-made components. Inputs and outputs are mostly
news feed-like lists of items. Being a Web application, Yahoo Pipes is limited in
that it operates on Web data only, in formats such as RSS or Atom. Another
application that supports a wider range of (Semantic) Web data standards and
also tightly integrates with the SPARQL query language is SparqlMotion
(http://composing-the-semantic-web.blogspot.com/2007/11/sparqlmotion-visual-semantic-web.html).
Because of the simplicity and typically small-scale scope of software like Yahoo Pipes,
SparqlMotion and also Konduit, such tools are often tagged with the term visual
scripting instead of VP.</p>
      <p>
        Closely related to our approach are the Semantic Web Pipes [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], which
apply the Yahoo Pipes look and feel and functionality directly to Semantic Web
data. Here, too, SPARQL is an integral component used to define the functionality of
the individual building blocks. A crucial difference between SparqlMotion and
Semantic Web Pipes on the one hand and Konduit on the other is that they
have a clear focus on Web data and do not integrate desktop data or application
functionality.
      </p>
      <p>
        The concept of designing workflows by chaining a set of components through
their inputs and outputs is related to a form of programming called dataflow
programming (e.g., [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]). Unlike the more conventional paradigm of imperative
programming, a program in dataflow programming does not consist of a set of
instructions which are essentially performed in sequence, but instead of a
number of interconnected "black boxes" with predefined inputs and outputs.
The program runs by letting the data "flow" through the connections. As soon
as all inputs of a particular component are valid, that component is executed.
      </p>
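      <p>To make this execution rule concrete, the following Python sketch (illustrative only, with made-up component names; Konduit itself is written in Qt/C++) models components as black boxes that fire as soon as all of their input slots hold a value:</p>
      <preformat>
# Minimal sketch of the dataflow execution model described above.
class Component:
    def __init__(self, name, n_inputs, func):
        self.name = name
        self.inputs = [None] * n_inputs   # slots, filled as data "flows" in
        self.func = func                  # the unit of functionality
        self.targets = []                 # (component, input slot) connections

    def connect(self, target, slot):
        self.targets.append((target, slot))

    def fire(self):
        result = self.func(*self.inputs)
        for target, slot in self.targets:
            target.receive(slot, result)

    def receive(self, slot, value):
        self.inputs[slot] = value
        if all(v is not None for v in self.inputs):   # all inputs valid?
            self.fire()                               # ... then execute

# Example pipe: source -> upper-case transformer -> sink (prints the result).
source = Component("source", 0, lambda: "hello pipes")
upper = Component("upper", 1, str.upper)
sink = Component("sink", 1, print)
source.connect(upper, 0)
upper.connect(sink, 0)
source.fire()   # sources have no inputs, so they are triggered directly
      </preformat>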
      <sec id="sec-2-1">
        <title>Related Work</title>
        <p>
          Apart from those mentioned above, there are a number of other systems which
are related to Konduit. WebScripter [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] is an application that allows users to
create reports in a spreadsheet-like environment from distributed data in the
DAML (DARPA Agent Markup Language) format. Unlike our approach,
WebScripter is based on the now more or less extinct DAML, and offers neither a
visual composition environment nor the option to connect to desktop
functionality. Potluck [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] is a web-based platform for visually mixing structured data from
different sources together, even if the data does not conform to the same
vocabulary or formatting conventions. An important restriction is the fact that only
data from sites which are hosted using the Exhibit platform (http://simile.mit.edu/exhibit/)
can be merged. Potluck is geared towards data integration, and therefore does not offer any of
the workflow capabilities we implement in Konduit.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Konduit Components and Workflows</title>
      <p>With Konduit we want to allow casual users to build simple programs in
order to perform and automate everyday tasks on RDF data. Konduit provides a
collection of useful components ready for immediate use. The components offer
individual units of functionality and are represented visually as blocks. They
are connected through input and output slots, and in this way the flow of the
program is defined. In order to keep the task of connecting components simple,
the only data that flows through a workflow is RDF. This condition ensures
that each component always fulfils the minimal requirement for dealing with
its input. Obviously, components may be specialized with respect to the actual
vocabulary on which they can operate and will decide at runtime if and how they
deal with the incoming RDF. By neither allowing different kinds of data (e.g.,
text, numbers, lists, images, etc.), nor typing the RDF data with respect to the
vocabularies used, we stay very close to the original UNIX pipes concept,
where data is always an untyped byte stream on one of the standard streams
stdin or stdout, and where it is up to each process or program how to handle
it (see Fig. 1).
[Fig. 1: UNIX pipes: processes chained by connecting one process's stdout to the next process's stdin as an untyped byte stream.]
Konduit is implemented as a desktop-based application for the
Linux desktop environment KDE 4, and is based on Plasma (http://plasma.kde.org/). The architecture
is plugin-based, so that each component is realised as a plugin into the
Konduit platform. Technically, Konduit plugins are also so-called "Plasma applets".
Therefore, designing and implementing new ones is quite straightforward (from
the point of view of a KDE developer); and although all existing Konduit plugins
have been written in Qt/C++, new ones can be written using the Ruby, Python
or Java bindings of Qt.</p>
      <sec id="sec-3-1">
        <title>7 http://simile.mit.edu/exhibit/ 8 http://plasma.kde.org/ (26/02/2009)</title>
        <p>We expect that new plugins will be developed
by external power users, as the need for them arises. As Plasma applets, the
Konduit plugins can be loaded and used as independent applications directly on
the desktop, without being restricted to the Konduit workspace. The workspace
is not unlike a drawing canvas, onto which components can be dropped from
a lateral toolbar. On this "drawing area" the user can connect the input slots to
output slots of different components, move the blocks around, set their
parameters, and in this way build small applications.</p>
        <p>Konduit makes use of the semantic desktop features that come as part of
the Nepomuk implementation in KDE 4, especially the Soprano RDF framework
(http://soprano.sourceforge.net/). Soprano is also used to store the saved workflows and blackboxes as RDF in a
repository (with the given connections and values for configuration parameters).</p>
        <sec id="sec-3-1-1">
          <title>Components</title>
          <p>Formally, a component is defined by the following parameters: (i) a set of RDF
input slots I, (ii) a set of RDF output slots O, (iii) a set of parameters P which
allow for user input in the workflow, and (iv) a unit of functionality F, which works
on the input I and generates the output O. The parameters P influence the
behaviour of F.</p>
          <p>Definition 1. Component = (I, O, P, F)</p>
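          <p>As a minimal illustration of Definition 1, the following Python sketch models a component as a tuple of input slots, output slots, parameters and a functionality; the names and the rdflib library are stand-ins chosen for brevity, not Konduit's actual Qt/C++ plugin API:</p>
          <preformat>
# Illustrative encoding of Definition 1: Component = (I, O, P, F).
from dataclasses import dataclass
from typing import Callable, Dict, List

from rdflib import Graph  # inputs and outputs are always RDF graphs


@dataclass
class Component:
    inputs: List[str]                      # I: named RDF input slots
    outputs: List[str]                     # O: named RDF output slots
    parameters: Dict[str, str]             # P: user-set configuration values
    functionality: Callable[..., Dict[str, Graph]]  # F: input graphs to output graphs

    def run(self, bound_inputs: Dict[str, Graph]) -> Dict[str, Graph]:
        # F works on the graphs bound to I, influenced by the parameters P,
        # and produces one graph per output slot in O.
        return self.functionality(bound_inputs, self.parameters)


# A source (no inputs) that parses a graph from a URL given as parameter
# (the URL is a hypothetical example):
def fetch(_inputs, params):
    g = Graph()
    g.parse(params["url"])
    return {"out": g}

source = Component(inputs=[], outputs=["out"],
                   parameters={"url": "http://example.org/data.rdf"},
                   functionality=fetch)
          </preformat>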
          <p>The number of input and output slots is not fixed and can be 0 or more.
Depending on the number of slots, components can be grouped into three categories:
sources, sinks, and ordinary components. Sources are components that do not
have any inputs. They supply the workflow with data. Because the data graphs
can be merged, there can be more than one source for any workflow. Typical
examples of sources are connectors to RDF stores, file (URL) input components,
or converters from other, non-RDF formats. Sinks are components that do not
have any outputs. They represent the final point(s) of any workflow. Examples
of sink components are application adaptors, serializers (file output components)
and visualizers. Unlike in dataflow programming, where a component is run as
soon as all inputs are valid, Konduit workflows are activated from a sink
component, usually by clicking on an activation button.</p>
          <p>Ordinary components can be further classified according to the kind of
functionality F they contain (a minimal sketch follows the list):</p>
          <p>- Merger: combines the input graphs into a single output graph.
- Duplexer: duplicates the input graph to two outputs.
- Transformer: applies a transformation on the input graph and outputs
the resulting graph.</p>
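          <p>In terms of simple graph operations, and with rdflib standing in for the Soprano-based implementation, these three kinds of functionality can be sketched as follows:</p>
          <preformat>
# Illustrative graph operations behind the three kinds of ordinary components.
from rdflib import Graph

def merger(a: Graph, b: Graph) -> Graph:
    return a + b          # graph union: combine the input graphs into one output

def duplexer(g: Graph) -> tuple:
    return g, g           # hand the same input graph to two output slots

def transformer(g: Graph, construct_query: str) -> Graph:
    return g.query(construct_query).graph   # e.g. a SPARQL CONSTRUCT rewrite
          </preformat>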
          <p>An important aspect of our approach is directly tied to the fact that all
inputs and outputs are RDF graphs. As a result, any workflow can itself become
a component, meaning that workflows can be built recursively.</p>
          <p>In this way, it is
possible to create a library of specialised components (which we call blackboxes),
based on the combination of basic components. We will pick this idea up again
in Sect. 3.2.</p>
          <p>Sources. Sources are a special type of component that do not have any input
slots. There is always at least one source at the start of any workflow.</p>
          <p>There is a dedicated source component for reading data from the local
Nepomuk RDF repository. This source extracts the desktop data according to a
SPARQL CONSTRUCT query given as a parameter. The Nepomuk source element
has a variant that is meant to help the user create the SPARQL query in a
friendlier way, by means of a smart wizard with autocompletion and
suggestions. Another basic source component is the file input source, which takes
a URL as a parameter. The URL can point to a file (the network is transparent, so
the path can be local or remote) or to a SPARQL endpoint (see Fig. 3). This
component takes as a parameter the expected serialization of the graph. For
parsing it uses the parsers made available by the Soprano library.</p>
          <p>Fig. 3: The three uses of the Konduit File Input Source.</p>
          <p>There are several components that transform non-RDF data to RDF. The literal
input takes any text given as a parameter and transforms it into an RDF graph
containing exactly one triple:</p>
          <preformat>
&lt;http://www.konduit.org/elements/LiteralValue/data&gt;
    &lt;http://www.w3.org/2000/01/rdf-schema#comment&gt;
        "string data"^^&lt;http://www.w3.org/2001/XMLSchema#string&gt; .
          </preformat>
          <p>The literal file input creates the same kind of triple, using as the string value
the content of the file given as a parameter.</p>
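          <p>For illustration, the same one-triple graph can be produced with a few lines of rdflib (used here as a Python stand-in for the Soprano-based implementation):</p>
          <preformat>
# Sketch of what the literal input component produces: a one-triple graph.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import RDFS, XSD

def literal_input(text: str) -> Graph:
    g = Graph()
    g.add((
        URIRef("http://www.konduit.org/elements/LiteralValue/data"),
        RDFS.comment,
        Literal(text, datatype=XSD.string),
    ))
    return g

# The literal file input does the same, but reads the text from the file
# given as parameter, e.g. literal_input(open(path).read()).
print(literal_input("string data").serialize(format="nt"))
          </preformat>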
          <p>Transformers. The most basic and simplest transformer component is the filter
element. It changes the input graph according to a SPARQL CONSTRUCT query
given as a parameter. The filter element can be saved with fixed queries and thus
create specialized converters from one vocabulary to another. Another useful
transformer is the duplicate remover component which, as the name suggests,
outputs each unique triple from the input graph exactly once and discards all
the duplicates.</p>
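          <p>The following is a minimal sketch of these two transformers, again with rdflib in place of Soprano; the CONSTRUCT query shown (which keeps only foaf:Person resources and their names) is a made-up example of a user-supplied parameter:</p>
          <preformat>
# Sketch of the filter element and the duplicate remover component.
from rdflib import Graph
from rdflib.namespace import FOAF

EXAMPLE_FILTER_QUERY = """
CONSTRUCT { ?p a foaf:Person ; foaf:name ?name . }
WHERE     { ?p a foaf:Person ; foaf:name ?name . }
"""

def filter_component(input_graph: Graph, construct_query: str) -> Graph:
    # the CONSTRUCT query is the parameter set by the user on the element
    return input_graph.query(construct_query, initNs={"foaf": FOAF}).graph

def duplicate_remover(input_graph: Graph) -> Graph:
    out = Graph()
    for triple in set(input_graph):   # each unique triple exactly once
        out.add(triple)
    return out
          </preformat>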
          <p>Visualizers. The visualizer components display the RDF data received as input
in various forms. So far there are only two components of this type: the data
dump sink, which shows the graph as quadruples in a separate window; and the
data table sink, which creates a table for each class of resource found in the input
graph, with one row per instance of that class in the graph.
The columns are given by the properties of the class shown in each table.</p>
          <p>Application adaptors. Application adaptors call the functionality of an
external application or protocol with the data given in the input graph.</p>
          <p>One such adaptor is the mailer element. It takes as input several graphs of
data: one of foaf:Persons with mbox and name, one with the list of files to attach
to the sent emails, a template for the message, and a subject.</p>
          <p>Another adaptor is the scripter element, which passes the input RDF graph
as input to a script available on the desktop. There is no restriction regarding
the nature of the script or the language in which it is written, as long as it is
executable, takes RDF as input, and outputs RDF as well. The serialization
for the input and output must be the same, and it can be specified as a parameter.</p>
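          <p>As an illustration, a script that the scripter element could call might look as follows (assuming, for the sake of the example, that the graph is passed on standard input and that N-Triples is chosen as the serialization parameter):</p>
          <preformat>
#!/usr/bin/env python
# Example external script for the scripter element: reads an RDF graph from
# stdin, adds an rdfs:comment to every subject, and writes RDF to stdout.
# The serialization (here N-Triples) must match the parameter set in Konduit.
import sys

from rdflib import Graph, Literal
from rdflib.namespace import RDFS

graph = Graph()
graph.parse(data=sys.stdin.read(), format="nt")

for subject in set(graph.subjects()):
    graph.add((subject, RDFS.comment, Literal("processed by example script")))

sys.stdout.write(graph.serialize(format="nt"))
          </preformat>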
        <sec id="sec-3-2-1">
          <title>Work ows</title>
          <p>A workflow is defined by specifying (i) a set of components C, and (ii) a function f
defined from the set of all the inputs of the components of C to the set of all the
outputs of the components of C plus the nil output. The function f shows how
the components of C are connected. The inputs that are not connected have a
nil value of f; the outputs that do not appear as a value of f are not connected.</p>
          <p>Definition 2. Workflow = (C, f) where f : inputs(C) → outputs(C) ∪ {nil}</p>
          <p>Workflows can be saved and reused. Saving a workflow implies saving all the
components that have at least one connection to the workflow, as well as their
existing connections, parameters and layout. There is no requirement that the
components be completely connected, so there can be input or output
slots that remain open. A saved workflow can be reopened and modified by
adding or removing components, or by changing connections or parameters, and
thus different workflows can be obtained with minimum effort.</p>
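          <p>One possible encoding of Definition 2, with made-up component and slot names, is simply a set of components together with a partial mapping from input slots to output slots:</p>
          <preformat>
# Illustrative encoding of Definition 2: Workflow = (C, f), where f maps every
# input slot either to an output slot or to nil (None). Names are hypothetical.
components = {"nepomuk_source", "artist_filter", "dup_remover", "file_output"}

f = {
    ("artist_filter", "in"): ("nepomuk_source", "out"),
    ("dup_remover", "in"): ("artist_filter", "out"),
    ("file_output", "in"): ("dup_remover", "out"),
    ("file_output", "template"): None,   # an input slot left unconnected
}

# Output slots that never appear as a value of f simply remain unconnected.
connected_outputs = {target for target in f.values() if target is not None}
          </preformat>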
          <p>Even simple workflows can have numerous components, and the more complex
ones, with tens of components, can become too big to manage in the workspace
provided by the application. To aid the user in handling large and complex
workflows, we added modularization to Konduit. Workflows can thus be saved
as reusable components, which we call blackboxes and which are added to the
library of available elements. Blackboxes can be used afterwards in more complex
workflows. This can be done recursively as more and more complexity is added.
The inputs and outputs of blackboxes must be marked in the original workflow
by special input and output components (as illustrated in Fig. 4).</p>
          <p>The following example illustrates what Konduit can do for the user of a semantic
desktop.</p>
          <p>John is a music enthusiast. He enjoys having his music collection organized,
and if he likes an artist he will try to find that artist's entire discography.
Whenever he discovers a new singer, he creates a file with that singer's discography and
marks which albums or songs he owns and which he does not. This task usually
requires many searches, on the web as well as on John's own computer. Some
of the common problems he runs into are: on the web, the information he needs
is spread across several web pages which first need to be found; on his computer, the
music files are spread over several folders, and he would have to manually check
each file to mark it as owned.</p>
          <p>This example highlights a number of important aspects that our approach
addresses, and illustrates how a tool such as Konduit can be used:
- Accessing and processing desktop data: John uses the semantic desktop
offered by Nepomuk on KDE 4, so his music library metadata is stored in
the Nepomuk repository and can therefore be processed by Konduit.
- Accessing and processing web data: Services (such as http://dbtune.org/musicbrainz/) expose their data as RDF,
which means that our system can use it.
- Merging desktop and web data: Since both kinds of data sources use a unified
data model, Konduit can simply mash both together.
- Using desktop functionality: Since our approach is desktop-based, we can
easily access and integrate the functionality of arbitrary desktop applications
or run local scripts that are normally executable on the desktop (with the
restriction that they take RDF as input and output RDF).</p>
          <p>Three main parts of the workflow stand out: preparation of the data found
online, preparation of the data found locally on John's desktop, and the
generation of the file. For the first two parts we create sub-workflows, which we save as
blackboxes and use in the final workflow. Both blackboxes take as input
the name of the artist and output album and track data, one from the desktop
and the other from the web.</p>
          <p>Desktop data. To access the local music metadata we need a Nepomuk source
component. It will return the graph of all songs found in the Nepomuk repository,
with title, artist, album, track number and the URL of the file storing the song.
This graph needs to be filtered so that only the songs by the specified artist
remain. For this we use a filter element.</p>
          <p>Web data. We use the SPARQL endpoint provided by the MusicBrainz service
(http://dbtune.org/musicbrainz/sparql) to retrieve music information. To connect to it we need a file input source with
a query that retrieves data about artists, albums and tracks. The graph returned
by the source has to be filtered by the artist name. This is done with a filter
component that also has the function of a vocabulary converter, as it takes in
data described using the Music Ontology (http://purl.org/ontology/mo/) and creates triples containing the
same data described with the Xesam ontology (http://xesam.org/main/XesamOntology).</p>
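          <p>To give a flavour of such a converting filter, the sketch below uses a SPARQL CONSTRUCT query that rewrites Music Ontology data into Xesam-style triples for one artist. It is illustrative only: the exact property names, the Xesam namespace URI and the artist value are assumptions, not the query used in the actual workflow.</p>
          <preformat>
# Sketch of a vocabulary-converting filter (property names are assumptions).
from rdflib import Graph, Namespace

MO = Namespace("http://purl.org/ontology/mo/")
XESAM = Namespace("http://freedesktop.org/standards/xesam/1.0/core#")  # assumed URI
FOAF = Namespace("http://xmlns.com/foaf/0.1/")
DC = Namespace("http://purl.org/dc/elements/1.1/")

CONVERT_QUERY = """
CONSTRUCT {
    ?track xesam:title ?title ;
           xesam:artist ?name ;
           xesam:album ?albumTitle .
}
WHERE {
    ?artist a mo:MusicArtist ; foaf:name ?name .
    ?album  foaf:maker ?artist ; dc:title ?albumTitle ; mo:track ?track .
    ?track  dc:title ?title .
    FILTER (?name = "Example Artist")
}
"""

def convert(input_graph: Graph) -> Graph:
    ns = {"mo": MO, "xesam": XESAM, "foaf": FOAF, "dc": DC}
    return input_graph.query(CONVERT_QUERY, initNs=ns).graph
          </preformat>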
          <p>
            Running the script. The scripter will take as input a graph constructed from
two subgraphs: one containing the data about the artist extracted from the
web and the other containing the data about the artist available on the desktop. Both graphs
contain Xesam data. The merged outputs are first passed through a duplicate
remover component to eliminate the redundant triples. The script takes the
resulting graph of albums and tracks for the given artist and generates a file
containing the discography. The RDF output of the script contains the path
to the generated file, and is used by a File Open component to display the
discography in the system default browser. The final workflow is depicted in
Fig. 5 and the generated discography file in Fig. 6. A more detailed description
of the workflow, including the SPARQL queries that are used and the script, can
be found at [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ].
          </p>
        </sec>
      </sec>
    <sec id="sec-4">
      <title>Discussion and Future Work</title>
      <p>In this section, we will discuss a number of issues related to our conceptual
approach in general, as well as to our Konduit implementation in particular.</p>
      <p>We have argued that we restrict the kind of data that can flow within a
workflow to be only RDF. By only allowing one kind of data, we keep the model
simple and elegant. However, in reality we will often want to deal with other
kinds of data (text, URLs, images, etc.). At the moment, we handle these cases
through component parameters, but this solution often feels rather awkward.
We plan to study further whether adding support for types other than RDF
would justify the increase in complexity.</p>
      <p>Currently we do not have support for control flow components (loops, boolean
gates, etc.). Including such features would certainly make our approach much
more versatile and powerful, and may be an interesting line of development
for the future.</p>
      <p>Some of the basic components available for Konduit require previous
knowledge of writing SPARQL queries. Since the queries given as parameters to the
source and filter elements can influence the performance of the entire workflow,
we recognize the need for a smart query editor that is suitable for naive users.
Our current solution to support end users in creating queries is based on
autocompletion; however, in order to make the system more accessible, we think it will be
necessary to introduce a different kind of interface, which would abstract away
from the actual syntax altogether and model the query on a higher level. Such
an interface would possibly still be of a graphical nature, but without simply
replicating the SPARQL syntax visually. Alternatively or additionally, a natural
language interface would be a promising direction for further research.</p>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>We have presented an approach for enabling casual, non-technical users to build
simple applications and workflows from structured data. To simplify the building
process, we have chosen a visual scripting approach, which is inspired by software
such as Yahoo Pipes. We expect that users will benefit most from our approach
if they operate in a Semantic Desktop-like environment, where they will have
access to the data and functionality they are used to and have to work with on
a daily basis. However, our approach and implementation also enable users to
integrate data and functionality from their desktops with data from the Web,
thus representing a step towards the convergence of those two domains.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>The work presented in this paper was supported (in part) by the Líon project, supported
by Science Foundation Ireland under Grant No. SFI/02/CE1/I131, and (in part) by the
European project NEPOMUK No. FP6-027705.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>S.</given-names>
            <surname>Decker</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Frank</surname>
          </string-name>
          .
          <article-title>The networked semantic desktop</article-title>
          . In C. Bussler,
          <string-name>
            <given-names>S.</given-names>
            <surname>Decker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schwabe</surname>
          </string-name>
          , and O. Pastor, editors,
          <source>WWW Workshop on Application Design</source>
          ,
          <article-title>Development and Implementation Issues in the Semantic Web</article-title>
          , May
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>L.</given-names>
            <surname>Dragan</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Möller</surname>
          </string-name>
          .
          <source>Creating discographies with Konduit</source>
          ,
          <year>2009</year>
          . http://smile.deri.ie/konduit/discography.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>T.</given-names>
            <surname>Groza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Handschuh</surname>
          </string-name>
          , K. Moller, G. Grimnes,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sauermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Minack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mesnage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jazayeri</surname>
          </string-name>
          , G. Reif, and
          <string-name>
            <given-names>R.</given-names>
            <surname>Gudjonsdottir</surname>
          </string-name>
          .
          <article-title>The NEPOMUK project – on the way to the social semantic desktop</article-title>
          . In T. Pellegrini and S. Schaffert, editors,
          <source>Proceedings of I-Semantics' 07</source>
          , pages 201–211. JUCS,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>D. F.</given-names>
            <surname>Huynh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and D. R.</given-names>
            <surname>Karger</surname>
          </string-name>
          . Potluck:
          <article-title>Semi-ontology alignment for casual users</article-title>
          . In K. Aberer, K.-S. Choi,
          <string-name>
            <given-names>N.</given-names>
            <surname>Noy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Allemang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Nixon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Golbeck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maynard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mizoguchi</surname>
          </string-name>
          , G. Schreiber, and P. Cudré-Mauroux, editors,
          <source>6th International Semantic Web Conference and 2nd Asian Semantic Web Conference, ISWC+ASWC2007</source>
          , Busan, Korea, volume
          <volume>4825</volume>
          <source>of LNCS</source>
          , pages 903–910
          , Heidelberg,
          <year>November 2007</year>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>T.</given-names>
            <surname>Menzies</surname>
          </string-name>
          .
          <article-title>Visual programming, knowledge engineering, and software engineering</article-title>
          .
          <source>In Proc. 8th Int. Conf. Software Engineering and Knowledge Engineering</source>
          , SEKE. ACM Press,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>C.</given-names>
            <surname>Morbidoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Phuoc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Polleres</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Tummarello</surname>
          </string-name>
          .
          <article-title>Previewing semantic web pipes</article-title>
          . In S. Bechhofer, editor,
          <source>Proceedings of the 5th European Semantic Web Conference (ESWC2008)</source>
          , Tenerife, Spain, volume
          <volume>5021</volume>
          <source>of LNCS</source>
          , pages 843–848
          . Springer,
          <year>June 2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>L.</given-names>
            <surname>Orman</surname>
          </string-name>
          .
          <article-title>A multilevel design architecture for decision support systems</article-title>
          .
          <source>SIGMIS Database</source>
          ,
          15(3):3–10
          ,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>J. K.</given-names>
            <surname>Ousterhout</surname>
          </string-name>
          . Scripting:
          <article-title>Higher Level Programming for the 21st Century</article-title>
          . In IEEE Computer Magazine,
          <year>March 1998</year>
          . http://home.pacbell.net/ouster/scripting.html.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>B.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Frank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Szekely</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Neches</surname>
          </string-name>
          , and
          <string-name>
            <surname>J. Lopez.</surname>
          </string-name>
          <article-title>WebScripter: Grass-roots ontology alignment via end-user report creation</article-title>
          . In D. Fensel,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sycara</surname>
          </string-name>
          , and J. Mylopoulos, editors, 2nd International Semantic Web Conference, ISWC2003, Sanibel Island, FL, USA, volume
          <volume>2870</volume>
          <source>of LNCS</source>
          , pages 676–689
          , Heidelberg,
          <year>November 2003</year>
          . Springer.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>