<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Who the FOAF knows Alice? A needed step towards Semantic Web Pipes?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Christian Morbidoni</string-name>
          <email>christian@deit.univpm.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Axel Polleres</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giovanni Tummarello</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DERI Galway, National University of Ireland</institution>
          ,
          <addr-line>Galway</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>SeMedia Group, Università Politecnica delle Marche</institution>
          ,
          <addr-line>Ancona</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we take a bottom-up view of RDF(S) reasoning. We discuss some issues and requirements on reasoning towards effectively building Semantic Web Pipes, i.e., aggregating RDF data from various distributed sources. If we leave out complex description logics reasoning and restrict ourselves to the RDF world, it turns out that some problems, in particular how to deal with contradicting RDF statements, do not yet find proper solutions within the current Semantic Web stack. Besides theoretical solutions which involve full DL reasoning, we believe that more practical and probably more scalable solutions are conceivable, one of which we discuss in this paper: expressing and resolving conflicting RDF statements by means of a specialized RDF merge procedure. We have implemented this conflict-resolving merge procedure in the DBin system.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Publishing RDF files on the Web is bound to become more and more a way to state
facts that are asserted or believed to be true by the producer of the source itself.
DBpedia [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], for example, publishes a large collection of such facts by extracting them
from the collective works of the Wikipedia communities. FOAF [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] files are personal
RDF models which are created by individuals to state facts about, typically, themselves.
Nothing, however, prevents them in general from stating facts about other entities, and this
is in fact a fundamental feature of the “Semantic Web”: everyone is allowed to “state”
about, virtually, anything. In some cases one might even be inclined to trust third-party
information more than self-descriptions, for instance comments about an enterprise or
a product one considers buying. The sum of RDF statements currently known to be
HTTP-retrievable is now in the order of billions, with millions of individual HTTP
locations (sources) hosted on tens of thousands of web sites, and rapidly increasing. Along
with this increased take-up of RDF on the Web, upcoming query language standards
like SPARQL [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], or RDF search engines like SWSE [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] or Sindice [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] shall finally
enable structured querying over Web data. Unfortunately, however, there is no clear and
established model of how to use such amounts of information coming from many
diverse sources. Using any available source directly, e.g. crawling/downloading and using
it, might not be advisable or sufficient. More information might be needed, such as, for
example, patches to the original data. Other cases include when a source is in general
considered useful but is known to contain statements which need to be removed, e.g.
outdated facts (a “negative” information patch is needed), or subjective assertions which
can be accepted or not depending on who is reading the data. In general, getting
information from the Web into one’s own semantic client or system is very likely to require,
or at least benefit from, a series of custom steps to be performed, involving a number of
external or internal sources, before having a version which can be used directly. Also,
facing the sheer amount of data to be expected, more complex tasks such as ontological
inferences or complex query answering will profit from such preprocessing, which only
preserves relevant and useful information. In this paper, we focus on one facet of such
preprocessing, namely the retraction of unwanted RDF data, and present a practical
solution for this problem.
? This work has been partially supported by the European FP6 project inContext (IST-034718),
by Science Foundation Ireland under the Lion project (SFI/02/CE1/I131), and by the European
project DISCOVERY (ECP-2005-CULT-038206).
      </p>
      <p>Along these lines, the remainder of this paper discusses the following issues: In
Section 2 we introduce the idea of “Semantic Web Pipes”, i.e. how a new breed of
applications composed of small building blocks to aggregate, filter and preprocess chunks
of RDF data could contribute to making the Semantic Web real. In such aggregations
from arbitrary sources on the Web we will naturally have to deal with contradicting
statements. We will look at how current Semantic Web languages could
support the expression of such contradicting/negative statements and how the resolution of
conflicts is being addressed in Section 3. Actually, we will come to the conclusion that
current languages do not properly address this problem so far. Based on this
observation, and in an attempt to address the problem with a technique we already successfully
applied in a related domain (for synchronising RDF resources), we propose to express
revocations of RDF statements by means of so-called RDF MSG hashes. We discuss
this approach and its implications in Section 4. A prototype implemented on top of the
DBin 2.0 system is briefly described in Section 5, before we conclude with an outlook
to future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Towards Semantic Web Pipes</title>
      <p>Yahoo Web Pipes1 are a recent development which has already had a big
impact on the latest wave of web development by showing how customized services and
information streams can be implemented by sequentially processing and interleaving
existing feeds and services. With Pipes, resources, e.g. RSS feeds, can be merged with
one another, filtered according to specific pipe rules, used as input for an on-line
RESTful API to get yet more results, etc. Most interestingly, this all happens without the
original providers of information and services having to change anything on their side or
reach any form of agreement, other than to use HTTP and possibly RSS. Current mashup
models like Yahoo Pipes are however limited to “streams” of information (e.g. news
feeds) or single, simple API invocations on a remote site (e.g. a search for a specific
word, or, more generally, one-shot Web service invocations).</p>
      <p>
        In the same way as a Web Pipe enables an existing Web information stream to be
customized, extended and reused for a specific purpose as decided by the pipe creator,
we see a very clear interest in trying to use this model to address the issue we highlighted
before: how to make use of Web-published RDF sources? We might, for example, want
to use DBpedia knowledge about a topic, but sum it with the knowledge coming
from certain specific sites and correct it by eliminating some statements we believe
to be false. The Web Pipe model teaches us that we do not really want to download
the DBpedia RDF dump and operate directly on a local version of it, e.g. by adding
and subtracting triples in a complex SPARQL query (see also the following section).
By doing so once and in a static manner, we would create a customized knowledge
base at the beginning but would miss any new information that any of the composing
sources might later add. A much more dynamic and useful model would therefore be
a “Semantic Web Pipes” model, where an RDF piping engine can, on the fly and on
demand, work out the customized composition and processing of a set of Web sources
according to our specific needs. In cases where information needs to be simply added, the
RDF semantics [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] specifies how to merge two models: the piping engine therefore
has to do little more than downloading the files and putting them together in the same
store, standardizing apart blank nodes. But what to do when information needs to be
patched in a traditional sense, i.e. in part both removed and added?
1 http://pipes.yahoo.com/
      </p>
      <p>
        As a use case, let us take the case where Bob states that Charles knows Alice
in his FOAF [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] file. Alice has a questionable reputation, and Charles, clearly, has no
control over Bob’s FOAF file. Clearly, a minimal requirement on distributed metadata
is the ability to counter such false statements, thus giving Charles a way to state in
his FOAF file a simple and unambiguous statement: “I don’t know Alice”. We aim to
provide a simple and minimalistic solution to this problem, thus avoiding unnecessarily
complex reasoning.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Related Work: Expressing Negative RDF Statements</title>
      <p>First, we note that neither RDF nor RDF Schema provide means to make negative
statements such as “Charles doesn’t foaf:know Alice”, see the last statement in Figure 1(b).</p>
      <p>(a) Bob’s FOAF file:
@prefix : &lt;http://examp.org/˜bob#&gt; .
@prefix foaf: &lt;http://xmlns.com/foaf/0.1/&gt; .
:me foaf:name "Bob" .
:me foaf:knows &lt;http://alice.exa.org/i&gt; .
:me foaf:knows &lt;http://ex.org/˜charles#me&gt; .
&lt;http://ex.org/˜charles#me&gt; foaf:knows &lt;http://alice.example.org/i&gt; .
...</p>
      <p>(b) Charles’ FOAF file:
@prefix : &lt;http://ex.org/˜charles#&gt; .
@prefix foaf: &lt;http://xmlns.com/foaf/0.1/&gt; .
@prefix rdf: &lt;http://www...rdf-syntax-ns#&gt; .
:me rdf:type foaf:Person ;
    foaf:name "Charles" .
:me foaf:knows &lt;http://examp.org/˜bob#me&gt; .
:me foaf:knows &lt;http://alice.exa.org/i&gt; .
...</p>
      <p>The semantics of RDF(S) is purely monotonic and described in terms of positive
inference rules, so even if Charles instead added a new statement</p>
      <p>:me myfoaf:doesntknow &lt;http://alice.exa.org/i&gt; .</p>
      <p>he would not be able to state that statements with the property myfoaf:doesntknow
should single out2 foaf:knows statements.</p>
      <p>
        N3
Tim Berners-Lee’s Notation 3 (N3) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] provides, to some extent, means to express what
we are looking for, by the ability to declare falsehood over reified statements, which
would be written as:
{ :me foaf:knows &lt;http://alice.exa.org/i&gt; } a n3:falsehood .
Nonetheless, this solution is somewhat unsatisfactory due to the lack of formal
semantics for N3; N3’s operational semantics is mainly defined in terms of its implementation
cwm3 only.
      </p>
      <p>
        OWL
The falsehood of Charles knowing Alice can be expressed in OWL, however in a pretty
contrived way, as follows (for the sake of brevity we use DL notation here; the reader
might translate this to OWL syntax straightforwardly):
{charles} ∈ ∀foaf:knows.¬{alice}
Reasoning with such statements firstly involves OWL reasoning with nominals, which
most DL reasoners are not particularly good at, and secondly does not buy us too much,
as the simple merge of this DL statement with the information in Bob’s FOAF file would
just generate a contradiction, invalidating all, even the useful, answers. Para-consistent
reasoning on top of OWL, such as for instance proposed in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and related approaches,
solves this problem of classical inference, but still requires full OWL DL reasoning.
      </p>
      <p>
        SPARQL
Finally, more along the Pipes idea, one could, as a naive solution, deploy an off-the-shelf
SPARQL engine and filter Bob’s FOAF file by a query, leaving just the clean statements.
Imagine that Charles stores his unwanted statements in the RDF Web source &lt;http:
//ex.org/˜charles/badstatements.rdf&gt;; then such a query over the merge of
Bob’s and Charles’ FOAF files could filter out the unwanted statements by modeling set
difference in SPARQL.
However, simply putting the bad information in a separate file is not a proper solution
for the scenario we outlined, as it is not clear how a crawler stumbling over &lt;http://
ex.org/˜charles/badstatements.rdf&gt; should disambiguate this data from
valid RDF information. Rather, we would need to reify the negative statements, using
for instance the N3 version outlined before, or the “native” RDF reification vocabulary4,
which would – besides blowing up metadata by unhandy reified statements – further
complicate SPARQL querying of that data5 to filter out the “good” data.
2 In fact, we mean here overriding instead of simply contradicting in the pure logical sense.
3 http://www.w3.org/2000/10/swap/doc/cwm
      </p>
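      <p>Conceptually, such a naive filter query computes a set difference: every triple of the merged input that also occurs in the “bad statements” source is dropped. A minimal sketch of this effect in Python (not the authors’ implementation; triples are modeled as plain tuples and all names are hypothetical):</p>
```python
# Hypothetical sketch of the naive SPARQL-style filter as a set difference.
# Triples are plain (subject, predicate, object) tuples; only ground triples
# are handled -- blank nodes are exactly what makes this approach naive.

def filter_bad(statements, bad_statements):
    """Return the statements not listed in the bad-statements source,
    i.e. the set difference the SPARQL query would model."""
    bad = set(bad_statements)
    return [t for t in statements if t not in bad]

bob = [
    ("bob:me", "foaf:name", '"Bob"'),
    ("bob:me", "foaf:knows", "http://alice.exa.org/i"),
    ("bob:me", "foaf:knows", "http://ex.org/~charles#me"),
    ("http://ex.org/~charles#me", "foaf:knows", "http://alice.example.org/i"),
]
bad = [("http://ex.org/~charles#me", "foaf:knows", "http://alice.example.org/i")]
clean = filter_bad(bob, bad)
```
      <p>As the text notes, this per-triple filtering neither lets a crawler disambiguate revocations from ordinary RDF nor deals with blank nodes; the MSG-hash approach of the next section addresses both.</p>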
      <p>In the following, we will sketch a more practical solution to the problem, exploiting
previous work on Minimum Self Contained Graphs.</p>
    </sec>
    <sec id="sec-4">
      <title>Introducing “Negative Statements” using MSG Hashes</title>
      <p>
        Any RDF graph may be viewed as a set of triples. Triple-level processing of distributed
RDF files, particularly identifying the same RDF graphs, is made very complex by
the existence of blank nodes. For this reason, the RDFSync algorithm, which we
presented in previous work, introduced the notion of Minimum Self Contained Graphs
(MSGs) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Simply put, an MSG is constructed starting from a triple and
collecting, for each blank node in it, all the other triples attached to these, until no more blank
nodes are involved. Such “closure” makes sure that a graph can be recomposed at a
different location simply by merging all the MSGs of which it is composed, even if
these are transferred one at a time. As MSGs are stand-alone RDF graphs, they can be
processed with algorithms such as canonical serialization. We use an implementation of
the algorithm described in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], which is part of the RDFContextTools Java library6, to
obtain a canonical string representing the MSG, and then we hash it to an appropriate
number of bits to reasonably avoid collisions. This hash acts as a unique identifier for
the MSG, with the fundamental property of being content-based, which implies that two
remote peers would derive the same hash-ID for the same MSGs in their databases.
Each graph can therefore be treated as a set of digital hashes, each one representing
an MSG. In the context of the problem addressed in the present work, we use such
digital hashes to refer to the MSG itself, i.e., the finest granularity at which we allow
RDF statements to be revoked is the level of MSGs. The hash function we use for MSGs
takes the form of a literal encoding the 16 bytes of the MD5 hash of the canonical
graph serialization mentioned above. Stating that an MSG is false/revoked is therefore
as easy as stating a single triple where the subject is the source which revokes
another MSG, the predicate is a designated predicate (we suggest for instance the URI
&lt;http://sw.deri.org/09/2007/states_not&gt;) and the object is the 16-byte literal
containing the MSG hash, so the negative statement could be made directly
within Charles’ FOAF file or in a separate file as follows:
_:a &lt;http://sw.deri.org/09/2007/states_not&gt; "HASH_OF_:ME_FOAF:KNOWS_ALICE"^^msg:Hashsum .
      </p>
      <p>Storing MSG hashes instead of reifying statements has (besides saving storage space)
some other interesting implications: this solution works conceptually well as it takes
care even of the cases where one wants to deny statements which involve blank nodes.
This would not be possible using reification, due to the arising ambiguity. Digital hashes
over MSGs, which are agnostic about blank node IDs, avoid this problem.
4 Using rdf:Statement, rdf:subject, rdf:predicate, rdf:object.
5 Note that, in the FILTER query, we exploit the admittedly awkward way to model set difference
in SPARQL, which as such might already not be unanimously considered intuitive.
6 http://www.dbin.org/RDFContextTools.php</p>
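      <p>The MSG closure and its content-based hash can be sketched as follows (a toy model, not the RDFContextTools implementation: blank nodes are assumed to be strings starting with “_:”, and the canonicalization simply replaces blank node labels by a placeholder instead of using the algorithm of [4]):</p>
```python
import hashlib

def is_bnode(term):
    # Toy convention of this sketch: blank nodes are strings starting with "_:".
    return term.startswith("_:")

def msg_of(triple, graph):
    """Build the Minimum Self Contained Graph of `triple`: starting from the
    triple, keep adding every triple of `graph` that shares a blank node with
    the MSG collected so far, until no more blank nodes are involved."""
    msg = {triple}
    bnodes = {t for t in triple if is_bnode(t)}
    changed = True
    while changed:
        changed = False
        for t in graph:
            if t not in msg and any(term in bnodes for term in t):
                msg.add(t)
                bnodes.update(term for term in t if is_bnode(term))
                changed = True
    return frozenset(msg)

def msg_hash(msg):
    """Content-based hash of an MSG: blank node labels are replaced by a fixed
    placeholder so the hash is agnostic about them, the triples are serialized
    in sorted order, and the result is MD5-hashed. (A real implementation uses
    a proper canonical serialization; this toy version may conflate MSGs that
    differ only in their blank node structure.)"""
    lines = sorted(
        " ".join("_:" if is_bnode(x) else x for x in t) for t in msg
    )
    return hashlib.md5("\n".join(lines).encode("utf-8")).hexdigest()

charles_graph = {
    ("http://ex.org/~charles/foaf.rdf#me", "foaf:knows", "_:a"),
    ("_:a", "foaf:name", '"Alice"'),
    ("http://ex.org/~charles/foaf.rdf#me", "foaf:name", '"Charles"'),
}
msg = msg_of(("http://ex.org/~charles/foaf.rdf#me", "foaf:knows", "_:a"),
             charles_graph)
```
      <p>Two peers holding the same MSG under different blank node labels derive the same hash from this function, which is the property the revocation mechanism relies on.</p>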
      <p>A drawback of the solution to quasi “encode” the negative statements in MSG
hashes, which in fact may turn out to be a feature in certain use cases, is that the
negated statements are not clearly “readable”, e.g. by direct inspection of the RDF file.
This can be considered a feature rather than a bug, for instance when one cares that
denied statements are not to be known by third parties upfront.7</p>
      <p>If, on the contrary, the denied statements should be made legible, one could think
of adding auxiliary statements for this purpose (such as the above-mentioned reified
N3 statements, or agreed complementary predicate URIs, modified e.g. by adding
“not:” in front as part of the URI or as a designated URI scheme).</p>
      <p>Note that single statements in MSGs cannot be revoked, but only whole sets of
triples “surrounding” a particular blank node. So, if we imagine Charles would be
revoking the MSG hash for
&lt;http://ex.org/˜charles/foaf.rdf#me&gt; foaf:knows _:a .
_:a foaf:name "Alice" .
that would not have any effect if Bob had stated for instance
&lt;http://ex.org/˜charles/foaf.rdf#me&gt; foaf:knows _:a .
_:a foaf:name "Alice" ; foaf:homepage &lt;http://alice.exa.org/&gt; .
in his graph, as the additional foaf:homepage triple yields a different MSG and hence a
different hash.</p>
      <p>Decomposing a graph into MSGs and calculating MSG hashes might be
computationally expensive if the graph is big, contains a large number of blank nodes, and/or highly
connected blank nodes. As there is no way to retrieve an MSG starting from its hash, other
than decomposing the graph into MSGs and computing the hashes to find a match, the
operation of applying revocations might be time-consuming. To deal with this issue, we
could add additional information to revocations, namely one extra statement pointing to
one randomly chosen URI involved in the original MSG. Such an extended revocation
could look as follows:
_:a &lt;http://sw.deri.org/09/2007/states_not&gt; "HASH_OF_:ME_FOAF:KNOWS_ALICE"^^msg:Hashsum .
_:a &lt;http://sw.deri.org/09/2007/involvedResource&gt; &lt;http://alice.exa.org/i&gt; .</p>
      <p>
        When applying such a revocation, we only need to calculate the hashes of those MSGs
which – as a sufficient condition – contain at least one statement involving the chosen
URIs for revoked MSGs (in this case &lt;http://alice.exa.org/i&gt;), thus
avoiding a complete graph decomposition. Another way to go might be to do a complete
MSG decomposition once, when the graph is originally loaded, and to keep an index
of MSG hashes to original triples. Such initial computational effort would however
result in faster operations for repeated pipe calculations. Furthermore, we notice that MSG
decomposition might be needed anyway for other purposes, for example to perform
remote RDF synchronization [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
7 Although, by some additional machinery, particular negated statements could be revealed quite
easily in our current approach.
      </p>
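      <p>Under the same toy triple model as before, the hint carried by the involvedResource statement restricts where MSG decomposition has to start (a sketch with hypothetical names):</p>
```python
def candidate_seeds(graph, involved_resource):
    """Only MSGs containing at least one triple that mentions the hinted URI
    can match a revocation extended with an 'involvedResource' statement;
    MSG decomposition therefore only needs to start from these seed triples
    instead of decomposing the whole graph."""
    return [t for t in graph if involved_resource in t]

graph = [
    ("bob:me", "foaf:name", '"Bob"'),
    ("bob:me", "foaf:knows", "http://alice.exa.org/i"),
    ("bob:me", "foaf:knows", "http://ex.org/~charles#me"),
]
seeds = candidate_seeds(graph, "http://alice.exa.org/i")
```
      <p>Only the MSGs grown from these seeds need to be hashed and compared against the revoked hash.</p>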
    </sec>
    <sec id="sec-5">
      <title>A Simple Semantic Web Pipe Execution Engine: Description and Implementation</title>
      <p>
        Having explained the idea of encapsulating negative statements in MSG hashes and its
possible benefits, we have implemented a first prototypical Semantic Web Pipe engine
at the heart of the DBin 2.0 Semantic Web client and authoring tool, which we conceive
to be the basis of an effective Semantic Web application middleware. While DBin
0.x [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] was based on a P2P infrastructure where information “flows” across peers, DBin
2.0 simply provides the user with a more controlled way to define the order and the
location of the sources to import, and then “executes” the pipe to generate a final RDF
base which is then browsed and queried.
      </p>
      <p>For our simple prototype, we exploit this order in evaluating RDF statements to
be overridden: In the DBin piping engine, RDF sources can be either local or remote.
These are ordered in a stack according to the priority selected by the user. At execution
stage, a new empty triplestore is created which will contain the graph resulting from the
pipe; let us call it T. The sources are then processed one by one, from the one with
the lowest priority to the one with the highest priority.8 Naming the currently processed
graph G, the “ordered merge” procedure is the following:
1. G is cleaned of any negative MSG that overwrites a positive MSG in G (this means
that if G expresses “X” and “not X” we delete both assertions);
2. The content of G is added to T;
3. Negative statements are “applied”, i.e., if positive statements exist in T
corresponding to statements revoked in G, the lower-priority positive statements are removed
(this step is the same as the first one, except that it is applied to the resulting graph
T);
4. Any remaining revocations are dropped, as they must not have effects on the
higher-priority graphs considered in the next cycles.</p>
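      <p>The four steps above can be sketched as follows (a minimal model, not the DBin code: each source is assumed to carry its positive MSGs keyed by their content hash, plus a set of revoked hashes, and sources are passed from lowest to highest priority):</p>
```python
def ordered_merge(sources):
    """Conflict-resolving "ordered merge": `sources` is a list of dicts
    {"msgs": {hash: msg}, "revokes": set_of_hashes}, ordered from lowest
    to highest priority. Returns the resulting store T as {hash: msg}."""
    T = {}
    for source in sources:
        msgs = dict(source["msgs"])
        revokes = set(source["revokes"])
        # Step 1: clean G of revocations that hit G's own positive MSGs
        # (if G states both "X" and "not X", both are deleted).
        internal = revokes.intersection(msgs)
        for h in internal:
            del msgs[h]
        revokes -= internal
        # Step 2: add the (cleaned) content of G to T.
        T.update(msgs)
        # Step 3: apply the remaining revocations to T, removing the
        # lower-priority positive MSGs they refer to.
        for h in revokes:
            T.pop(h, None)
        # Step 4: remaining revocations are dropped implicitly -- the local
        # `revokes` set does not carry over to later, higher-priority cycles.
    return T

bob = {"msgs": {"h1": "bob knows alice", "h2": "charles knows alice"},
       "revokes": set()}
charles = {"msgs": {"h3": "charles knows bob"}, "revokes": {"h2"}}
result = ordered_merge([bob, charles])  # Bob has the lower priority
```
      <p>With no revocations involved, the procedure degenerates to a plain union of the sources, matching the remark below that the result then amounts to the common RDF merge.</p>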
      <p>Once this ordered, conflict-resolving “merge” procedure has been performed for all
the RDF sources, T contains the final RDF model and DBin applies RDFS reasoning
on it. We remark that, in the absence of negated statements, the result is tantamount to
exactly the common RDF merge.</p>
      <p>Clearly, by handling conflict resolution at the RDF merge level, and applying RDFS
reasoning only as a last step, many issues are solved in a simple, intuitive and, at the
same time, efficient manner. By removing at each step any remaining negative statement,
we opt for a “non-symmetric” approach where positive statements are somehow
considered more important and persistent than “negative” ones. Moreover, the remaining
RDF set is clearly consistent (being simple RDF).</p>
      <p>We note, however, that there could also be possibly problematic corner cases. For
instance, imagine that Bob sneaks in the unwanted statement about Alice as follows:
&lt;http://ex.org/˜charles#me&gt; myfoaf:likes &lt;http://alice.example.org/i&gt; .
myfoaf:likes rdfs:subPropertyOf foaf:knows .
8 In the current implementation, the priorities are implicitly given through a simple sequence of
sources which is processed one by one; priority is thus totally ordered.</p>
      <p>In this disguise, even if Bob’s FOAF data is given lower priority than Charles’ FOAF
file, the unwanted statement would survive the conflict resolution during our ordered
merge, since we do not do RDFS inference in this process.</p>
      <p>We are currently investigating repairs to our approach which remedy this situation,
e.g. by labeling inferred triples with the priority of the lowest statement contributing
to their inference and, in a recursive process, removing conflicting inferred triples in a
post-processing step. Unfortunately, we conjecture that finding this lowest statement is,
in the general case, intractable9, but we hope that an approximate solution, which at least
guarantees that only overall sound triples are inferred, might be achievable.</p>
      <p>
        Another drawback of the current approach is that the priority order among the
considered RDF sources has to be given upfront as user input to DBin, which might not be a
problem for smaller-scale pipe examples, but may be undesirable as the number of known
sources grows large. Trust negotiation policies, see e.g. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], encoded directly as
RDF statements within the sources, could help to assess priorities among RDF sources,
as we could derive them directly from the RDF data in those resources.
      </p>
      <p>Finally, we notice that, in some cases, the end user might want to have more options
than simply putting the considered sources in a total order, i.e. the pipe being a strict
sequence of sources. To allow more flexible handling of overriding statements, allowing
multiple sources to be considered at the same priority, we are working to add support for a
partial rather than a total order of sources. Here, several options could be considered:
a “cautious” solution, where any source may revoke information stated by any of the
other sources “at the same level”, or a “brave” solution, where revocations only apply to
statements made by strictly lower sources.</p>
    </sec>
    <sec id="sec-6">
      <title>Conclusions and future work</title>
      <p>We outlined in the present work a practical solution to add negative statements to RDF
without generating overall logical inconsistency. Even leaving aside full OWL
inference, we believe that being able to override RDF statements, based on user priorities on
which Web resources are more or less trustworthy, is a crucial feature for Semantic Web
applications. In this paper we first analyzed how negative statements can at all be
expressed in current Semantic Web languages and came to the conclusion that these languages
do not properly address this problem, as they provide no means to override statements in a
user-defined priority order among RDF sources on the Web. Based on this observation,
we presented a practical solution to the problem which is implemented on top of the
DBin 2.0 system.</p>
      <p>
        Our general ideas are based on the assumption that we only partially believe
Web-scale DL reasoning, i.e. handling complete OWL inferencing, to be feasible in the near
future. Our approach is a more practical one, dealing with the increasing amount of RDF
data out there in an effective and arguably feasible manner. The negative statements treated
in this work, which is still in a preliminary stage, are a first example of the practical
necessities we plan to address when effectively and efficiently processing Semantic Web
data for useful Semantic Web applications in the spirit of “Semantic Web Pipes”. In this
sense, this work is conceived to spark discussions for more practical solutions towards
making the Semantic Web real, which might also raise controversy among “purists” in
terms of what the term “Semantic Web Reasoning” comprises and what not. More
examples of issues we want to handle in practical implementations include linking RDF data
by adding views (see also [6, Section 2.10]), possibly involving scoped negation [
        <xref ref-type="bibr" rid="ref11 ref12">12,
11</xref>
        ], and evaluating the scalability of such extensions in practical scenarios.
9 A concrete algorithm and complexity studies for such an algorithm are still on our agenda.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>S.</given-names>
            <surname>Auer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Kobilarov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Cyganiak</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ives</surname>
          </string-name>
          .
          <article-title>DBpedia: A nucleus for a web of open data</article-title>
          .
          <source>In 6th Int'l Semantic Web Conference</source>
          , Busan, Korea, Nov.
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>T.</given-names>
            <surname>Berners-Lee</surname>
          </string-name>
          .
          <article-title>Notation 3</article-title>
          , since
          <year>1998</year>
          . Available at http://www.w3.org/DesignIssues/Notation3.html.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Bonatti</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Olmedilla</surname>
          </string-name>
          .
          <article-title>Rule-based policy representation and reasoning for the semantic web</article-title>
          .
          <source>In Reasoning Web - Third International Summer School</source>
          , pages
          <fpage>240</fpage>
          -
          <lpage>268</lpage>
          , Dresden, Germany, Sept.
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Carroll</surname>
          </string-name>
          .
          <article-title>Signing RDF graphs</article-title>
          .
          <source>In The Semantic Web - ISWC</source>
          <year>2003</year>
          , Second International Semantic Web Conference, pages
          <fpage>369</fpage>
          -
          <lpage>384</lpage>
          , Sanibel Island, FL, USA, Oct.
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>D.</given-names>
            <surname>Brickley</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <article-title>Friend of a Friend (FOAF) Vocabulary Specification 0.9</article-title>
          . Namespace Document, May
          <year>2007</year>
          , available at http://xmlns.com/foaf/spec/20070524.html.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>A.</given-names>
            <surname>Ginsberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hirtle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>McCabe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Patranjan</surname>
          </string-name>
          (eds.).
          <source>RIF Core Design. W3C Working Draft 10 July</source>
          <year>2006</year>
          , available at http://www.w3.org/TR/2006/WD-rif-ucr-20060710/ .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>A.</given-names>
            <surname>Harth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Umbrich</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Decker</surname>
          </string-name>
          .
          <article-title>Multicrawler: A pipelined architecture for crawling and indexing semantic web data</article-title>
          .
          <source>In 5th International Semantic Web Conference</source>
          , Athens, GA, USA, Nov.
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>P.</given-names>
            <surname>Hayes</surname>
          </string-name>
          .
          <article-title>RDF Semantics</article-title>
          .
          <source>W3C Recommendation</source>
          ,
          <year>February 2004</year>
          , available at http://www.w3.org/TR/rdf-mt/ .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>van Harmelen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>ten Teije</surname>
          </string-name>
          .
          <article-title>Reasoning with inconsistent ontologies</article-title>
          .
          <source>In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI'05)</source>
          , Edinburgh, Scotland,
          <year>August 2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>M.</given-names>
            <surname>Nucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Morbidoni</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Tummarello</surname>
          </string-name>
          .
          <article-title>Enabling Semantic Web communities with DBin: an overview</article-title>
          .
          <source>In ISWC2006 Semantic Web challenge</source>
          , Athens, GA, USA,
          <year>2006</year>
          . Finalist.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>A.</given-names>
            <surname>Polleres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Feier</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Harth</surname>
          </string-name>
          .
          <article-title>Rules with contextually scoped negation</article-title>
          .
          <source>In 3rd European Semantic Web Conference (ESWC2006)</source>
          , volume
          <volume>4011</volume>
          of Lecture Notes in Computer Science, Budva, Montenegro, June
          <year>2006</year>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>A.</given-names>
            <surname>Polleres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Scharffe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Schindlauer</surname>
          </string-name>
          .
          <article-title>SPARQL++ for mapping between RDF vocabularies</article-title>
          .
          <source>In 6th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE 2007)</source>
          , Vilamoura, Algarve, Portugal, Nov.
          <year>2007</year>
          . To appear.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>E.</given-names>
            <surname>Prud'hommeaux</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Seaborne</surname>
          </string-name>
          (eds.).
          <article-title>SPARQL Query Language for RDF</article-title>
          .
          <source>W3C Candidate Recommendation</source>
          ,
          <year>June 2007</year>
          , available at http://www.w3.org/TR/2007/CR-rdf-sparql-query-20070614/ .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>G.</given-names>
            <surname>Tummarello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Delbru</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Oren</surname>
          </string-name>
          .
          <article-title>Sindice.com: Weaving the open linked data</article-title>
          .
          <source>In Proceedings of the International Semantic Web Conference (ISWC)</source>
          , Nov.
          <year>2007</year>
          . To appear.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>G.</given-names>
            <surname>Tummarello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Morbidoni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Puliti</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Piazza</surname>
          </string-name>
          .
          <article-title>Signing individual fragments of an RDF graph</article-title>
          .
          <source>In Special interest tracks and posters of the 14th international conference on World Wide Web, Chiba, Japan</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>