<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Preserving Linked Data on the Semantic Web by the application of Link Integrity techniques from Hypermedia</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Rob</forename><surname>Vesse</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electronics &amp; Computer Science</orgName>
								<orgName type="laboratory">Intelligence, Agents &amp; Multimedia Group</orgName>
								<orgName type="institution">University of Southampton</orgName>
								<address>
									<settlement>Southampton</settlement>
									<postCode>SO17 1BJ</postCode>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Wendy</forename><surname>Hall</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electronics &amp; Computer Science</orgName>
								<orgName type="laboratory">Intelligence, Agents &amp; Multimedia Group</orgName>
								<orgName type="institution">University of Southampton</orgName>
								<address>
									<settlement>Southampton</settlement>
									<postCode>SO17 1BJ</postCode>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Leslie</forename><surname>Carr</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Electronics &amp; Computer Science</orgName>
								<orgName type="laboratory">Intelligence, Agents &amp; Multimedia Group</orgName>
								<orgName type="institution">University of Southampton</orgName>
								<address>
									<settlement>Southampton</settlement>
									<postCode>SO17 1BJ</postCode>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Preserving Linked Data on the Semantic Web by the application of Link Integrity techniques from Hypermedia</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">402F3748B19C1F494C68C3115430A928</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T17:52+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Experimentation</term>
					<term>Reliability</term>
					<term>Design</term>
					<term>Semantic Web</term>
					<term>Linked Data</term>
					<term>Link Integrity</term>
					<term>Preservation</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>As the Web of Linked Data expands it will become increasingly important to preserve data and links such that the data remains useful. In this work we present a method for locating linked data to preserve which functions even when the URI the user wishes to preserve does not resolve (i.e. is broken or does not return RDF), together with an application for monitoring and preserving the data. This work is based upon the principle of adapting ideas from hypermedia link integrity and applying them to the Semantic Web.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>The Web of Linked Data is characterised by the interlinking of disparate heterogeneous data sources and by the fact that the links between those sources are one of the primary mechanisms for navigating this data space. Since links are essential to the Web of Linked Data, we believe it is important to have mechanisms in place to maintain link integrity. The aim of link integrity is to ensure that a link works correctly: traversing the link takes you to a resource and, as far as possible, that resource is the one intended by the provider of the link. On a larger scale, link integrity deals with the overall integrity of interlinked datasets, such as documents within a Content Management System (CMS) or the linked datasets available on the Semantic Web. Link integrity is therefore one way of ensuring data integrity within the overall system, which in our use case consists of linked datasets.</p><p>Link integrity is an existing and well known problem from hypermedia, where two problems had to be dealt with: dangling links and the editing problem. Dangling links are the better known problem and are regularly experienced by users on the Web when they are presented with an HTTP error because the link they followed pointed to a resource which cannot be retrieved. The editing problem refers to the situation in which the content at the end of a link is edited so that the link, while still resolvable, no longer points at the material its author intended. A 1995 paper <ref type="bibr" target="#b16">[16]</ref> provides examples of link integrity techniques in open hypermedia. The widespread growth of the World Wide Web <ref type="bibr" target="#b5">[5]</ref> in the mid-1990s led to some new research, but as search engines became commonplace towards the end of the decade research interest dwindled. 
It was perceived that users did not care sufficiently to warrant research into the problem, as they could locate missing resources effectively using search engines; in addition, the scale of the Web by that time was simply too vast for many proposed solutions to handle. Davis's survey <ref type="bibr" target="#b10">[10]</ref> provides a good overview of the state of this research as of the end of the 1990s. Another reason for the decline was that tolerance of failing links was itself one of the reasons the Web was able to expand as fast as it did: it did not matter if links failed and produced the familiar HTTP 404 error, so users were able to publish content without worrying about whether their links to external content were valid.</p><p>Ashman's 2000 paper <ref type="bibr" target="#b4">[4]</ref>, which discusses link integrity with particular reference to electronic document archives, provides both a useful survey of existing work and a key motivation for ongoing research. As more document collections were translated into digital forms and placed onto intranets, people once again became concerned about link integrity. Users wanted assurances that links into the document archives would work consistently, and ideally links out of the archives would work correctly as well, since it may not be possible to alter the archived documents without invalidating the integrity of the archive.</p><p>In this vein Veiga and Ferreira <ref type="bibr" target="#b24">[24,</ref><ref type="bibr" target="#b25">25]</ref> discuss the possibility of turning the Web into an effective knowledge repository by use of replication and versioning. Their work follows on from earlier work such as Moreau &amp; Gray's <ref type="bibr" target="#b19">[19]</ref>, which proposed limited use of replication and versioning but relied significantly on author and user involvement in the process. 
In Veiga &amp; Ferreira's work there is no requirement for author involvement in the process; only the end user need use a browser plugin to indicate the content they wish to replicate and preserve. Their results showed that the user could preserve the sections of the Web they were interested in with no perceivable performance impact; on average there was only a 12ms increase in retrieval time for resources. In Section 3 we discuss using an approach of this kind for the Semantic Web.</p><p>Phelps &amp; Wilensky introduced the concept of lexical signatures for Web pages in their Robust Hyperlinks paper <ref type="bibr" target="#b21">[21]</ref>. They compute the lexical signature of a page and append it to all links to that page so that, in the event of the link failing, a browser plugin can use the signature to relocate the page using a search engine. The obvious flaw in their work was that it required rewriting all the links on the Web, but Harrison &amp; Nelson later showed that these signatures need only be computed Just-in-Time (JIT) when a link fails <ref type="bibr" target="#b12">[12]</ref>. In their Opal system the signatures are computed JIT by retrieving cached copies of the pages from a search engine cache, computing the signature and then using search engines to relocate the page. As discussed in Section 3.1, a JIT style approach can be used effectively to recover linked data about a URI.</p></div>
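The lexical signature idea can be illustrated with a short sketch. This is not Phelps and Wilensky's actual method: real implementations weight terms by rarity (IDF) derived from search engine statistics, whereas this hypothetical version ranks terms by raw frequency against a toy stopword list.

```python
import re
from collections import Counter

# Toy stopword list; a real system would use a proper list plus IDF weights.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for", "on", "that"}

def lexical_signature(text, k=5):
    """Return the k most frequent non-stopword terms as a crude signature."""
    terms = [t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOPWORDS]
    return [term for term, _ in Counter(terms).most_common(k)]

page = ("link integrity hypermedia preservation linked data "
        "link integrity semantic web linked data link")
signature = lexical_signature(page, k=3)
```

In the JIT variant of Harrison and Nelson, such a signature would be computed from a search engine's cached copy of the page only after the link is found to be broken, then submitted as a query to relocate the page.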
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Semantic Web Research</head><p>Unlike on the traditional Web, it is not possible for semantic search engines like Sindice <ref type="bibr" target="#b22">[22]</ref> and Falcons <ref type="bibr" target="#b8">[8]</ref> to fulfil the same role as document search engines, because the users in the Semantic Web domain are typically client applications rather than humans. When a human encounters a dead link they usually navigate to a search engine and enter an appropriate search phrase to find alternative sources of information. A client application encountering a dead link will typically have no concept of how or where to find alternative sources of information, and URIs for linked data are not always as amenable to searching as text is for documents. It should be noted that, as with the existing Web, if the Web of Linked Data is to undergo a massive expansion then things must be allowed to fail; this does not mean, however, that we should not attempt to mitigate the problem as far as possible.</p><p>In terms of the Semantic Web there has been research into the versioning and synchronisation of RDF data which is relevant to aspects of our work, such as Tummarello et al's RDFSync <ref type="bibr" target="#b23">[23]</ref>, an algorithm for efficiently synchronising changes in RDF between multiple machines. This shows that change detection in RDF is non-trivial, due to the inherent data isomorphism caused by the use of blank nodes, but also that it can be achieved in an efficient manner. More recent research from Papavassiliou et al <ref type="bibr" target="#b20">[20]</ref> has shown that information about very basic changes in the RDF, such as that provided by systems like RDFSync or All About That (see Section 3.2), can be used to build applications which provide useful information to end users. 
In the case of Papavassiliou et al's paper they built a system which furnished users with high level descriptions of how RDFS vocabularies have changed, in order to aid users in working with such vocabularies. In addition there are systems like the Talis Platform<ref type="foot" target="#foot_0">2</ref>, a Semantic Web store that implements a versioning mechanism whereby updates can be made via a Changeset protocol <ref type="bibr" target="#b1">[1]</ref>. As part of this protocol they utilise a useful lightweight vocabulary for publishing changes in RDF data as RDF, which, as discussed in Section 3.2.3, we reuse in our own system.</p><p>Regarding Semantic Web specific link integrity problems, research has largely focused on the co-reference problem. Since there are many organisations publishing similar data semantically (bibliographic databases being a prime example), there are frequently many URIs for a single entity such as an author. Co-reference research aims to develop ways to efficiently and accurately determine URI equivalences and either refactor the data or republish this information to help other Semantic Web applications. There are several competing philosophies, ranging from the Okkam approach described by Bouquet et al <ref type="bibr">[7]</ref>, which advocates universally agreed URIs for each entity, to the Co-reference Resolution Service (CRS) approach of Jaffri et al <ref type="bibr" target="#b15">[15]</ref>, which determines co-referent URIs and republishes the information in dedicated triple stores. 
The CRS approach taken by the ReSIST project<ref type="foot" target="#foot_1">3</ref> within the RKB Explorer<ref type="foot" target="#foot_2">4</ref> application has potential for use in link integrity, as the information provided by a CRS could be utilised in a JIT fashion as in Harrison &amp; Nelson's work; we demonstrate how this can be done in Sections <ref type="figure" target="#fig_3">3 and 4</ref>.</p><p>In terms of link maintenance for the Semantic Web there has been some research in the form of the Silk framework by Volz et al <ref type="bibr" target="#b28">[28]</ref>, which computes links between different datasets. Their approach allows users to stipulate arbitrarily complex criteria for entity matching between datasets; the links produced can then be published via a CRS style service or added to the relevant datasets. As proposed in their later paper <ref type="bibr" target="#b27">[27]</ref>, this can be used as part of a link maintenance strategy; the possibility of combining this with our approach is discussed in Section 5. In a similar vein, Haslhofer and Popitsch's DSNotify system <ref type="bibr" target="#b14">[14]</ref> can monitor linked resources and inform the application when links are no longer valid, using feature based similarity metrics like the Silk framework.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">METHOD</head><p>As we have discussed, it is not realistic to maintain link integrity in a pre-emptive way, since such solutions have consistently been shown in previous work not to scale to the size of the Web. The focus must therefore be on recovery in the event of failure and on preservation to guard against the loss of data which end users consider interesting or useful. As the amount of data in the Web of Linked Data starts to expand massively, particularly with linked data being adopted by an increasing number of major organisations, we expect that, as with the early document Web, there will be an increasing amount of content published by both big companies and individuals. Just like the document Web, this explosion will most likely include much content that is poorly maintained and will lead to increasing numbers of broken links. We have two connected goals in this work: 1) to provide a means to retrieve resource descriptions, in the form of linked data about a URI, even when the URI is nonfunctional and 2) to provide the means for an end user to preserve and version these descriptions. To attempt to solve this problem we present an expansion algorithm for retrieving linked data about a URI even if that URI itself has failed in Section 3.1, and a preservation system built using this algorithm in Section 3.2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Expansion Algorithm</head><p>Since the goal of this work is to preserve linked data, it was deemed essential that, as far as possible, we leverage existing linked data technologies and services in order to effect this preservation. To this end we designed a relatively simple algorithm based on crawling techniques directed by a user definable expansion profile (see Definition 1). Our aim with this algorithm is to provide resource descriptions of a URI regardless of whether the URI itself is dereferenceable.</p><p>Even in the case where a URI is used only as an identifier in the description of another resource and is not itself dereferenceable, it is likely that we can still retrieve some data about it. The fact that a URI is minted only as an identifier, and that the person or organisation minting the URI does not provide the means to dereference it, does not affect our ability to find data about it, assuming that the identifier is used elsewhere, i.e. it is reused as part of linked data. Definition 1. An expansion profile is a Vocabulary of Interlinked Datasets (VoID) description of a set of datasets and linksets that should be used to locate linked data about the URI of interest. The VoID description may optionally be annotated with additional properties which affect the behaviour of the algorithm.</p><p>Drawing on the ideas in Alexander et al's Vocabulary of Interlinked Datasets (VoID) <ref type="bibr" target="#b3">[3]</ref> about the way it can be used to direct crawlers, we decided to use VoID as the primary means of expressing an expansion profile. We introduce a couple of additional predicates, since we require the means to allow end users to specify some basic characteristics of how the algorithm should behave, and since there is a type of service we need to express which is not contained in the VoID ontology. 
VoID has concepts of Datasets and Linksets: the former represent sets of data which may have SPARQL endpoint(s) and/or URI lookup endpoint(s), while the latter represent the types of interlinking between datasets. What VoID has no means to express is the location of a service, provided by a dataset, which allows an application to retrieve URIs considered equivalent to a given URI; this we term a URI discovery endpoint (see Definition 2). A discovery endpoint differs from a lookup endpoint in that the latter is expected to return everything the dataset knows about the given URI, as opposed to returning only equivalent URIs. Examples of existing discovery endpoints on the Semantic Web include RKBExplorer's CRSes <ref type="bibr" target="#b15">[15]</ref> and sameAs.org <ref type="foot" target="#foot_3">5</ref> . Another key difference between a lookup and a discovery endpoint is that links discovered from a discovery endpoint are considered to be on the same level of the crawl for the purposes of the algorithm, i.e. they do not have increased depth relative to the URI that discovery is performed upon. By this we mean that executing the algorithm performs a breadth-first, depth-limited linked data crawl starting from a given URI; in this tree structure a discovery endpoint introduces sibling nodes for a URI while a lookup endpoint introduces child nodes.</p><p>Our other extensions to VoID allow individual datasets/linksets to be marked as ignored (the algorithm will not use them) and allow the user to define the depth to which the algorithm should crawl (default 1). These extensions are defined as part of the AAT schema detailed in Section 3.2.1. Definition 2. 
A URI discovery endpoint is an endpoint that, when passed a URI, returns a graph containing URIs equivalent to the input URI, typically in the form of owl:sameAs links.</p><p>As already stated, the algorithm itself is a simple crawler which uses the input expansion profile as a guide to the potential sources of linked data it should consult when trying to find data about the URI of interest; this procedure is detailed in Algorithm 1. Note that the algorithm does not terminate in the event of an error retrieving data from a particular URI/endpoint but simply continues; by doing this it is still possible to retrieve some data even if the starting URI does not return a valid response. The algorithm will continue to issue queries about the URI to the various endpoints described in the given expansion profile, so unless the URI refers to a document that had very poor linkages or was not indexed by the semantic search services used, some RDF will be returned. This approach is similar to the JIT style approach of Harrison &amp; Nelson <ref type="bibr" target="#b12">[12]</ref> in that there need not be any foreknowledge of the URIs you wish to recover data about before you discover they are broken, since by utilising the caches and lookup services of relevant datasets it is still possible to recover data about the URI.</p><p>The basic behaviour of the algorithm is only to follow owl:sameAs and rdfs:seeAlso links, but the end user can specify that any predicate be treated as a link to follow by specifying an appropriate VoID linkset in their expansion profile.</p><p>[Algorithm 1, the expansion algorithm, is not fully reproduced here; the surviving fragment shows that for each equivalent URI a new pair is added to a ToExpand queue, and that the accumulated Dataset is finally returned.]</p><p>There are already some existing systems which work in a similar way to our algorithm, such as the sponger middleware used in Virtuoso <ref type="bibr" target="#b2">[2]</ref>. 
The main difference between our algorithm and those in the Virtuoso sponger is that ours is only interested in linked data and does not infer or create any additional data. Unlike the Virtuoso sponger it does not attempt to turn non-linked data into RDF, and it does not do any inference over the data it returns; it is designed only to find and return (in the form of an RDF dataset) linked data about the URI of interest. Yet as an expansion profile may reference any datasets and associated endpoints the user wishes, there is no reason why a user could not direct our algorithm to utilise a service like URIBurner<ref type="foot" target="#foot_4">6</ref>, which uses the Virtuoso sponger, in order to gain the benefits of the additional inferred data.</p></div>
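The crawl behaviour described in Section 3.1 can be sketched as follows. This is an illustrative reading of the algorithm, not the paper's Algorithm 1: `lookup` and `discover` are hypothetical stand-ins for a dataset's URI lookup endpoint and a URI discovery endpoint, and error handling simply skips failed endpoints so that a broken starting URI does not abort the crawl.

```python
from collections import deque

def expand(start_uri, lookup, discover, max_depth=1):
    """Breadth-first, depth-limited expansion sketch.

    discover(uri) returns equivalent URIs (siblings: same crawl depth);
    lookup(uri) returns (triples, linked_uris) (links are children: depth + 1).
    """
    dataset, seen = [], set()
    queue = deque([(start_uri, 0)])
    while queue:
        uri, depth = queue.popleft()
        if uri in seen or depth > max_depth:
            continue
        seen.add(uri)
        try:
            for eq in discover(uri):       # equivalent URIs stay at this depth
                queue.append((eq, depth))
            triples, links = lookup(uri)
            dataset.extend(triples)
            for link in links:             # followed links go one level deeper
                queue.append((link, depth + 1))
        except Exception:
            continue  # a failed URI/endpoint must not terminate the crawl
    return dataset

# Hypothetical endpoints: ex:dead does not resolve, but a discovery endpoint
# still knows an equivalent URI for it, so some data remains recoverable.
def lookup(uri):
    data = {
        "ex:A": ([("ex:A", "rdfs:seeAlso", "ex:B")], ["ex:B"]),
        "ex:A2": ([("ex:A2", "ex:p", "ex:X")], []),
        "ex:B": ([("ex:B", "ex:p", "ex:Y")], []),
    }
    if uri not in data:
        raise IOError("HTTP 404")
    return data[uri]

def discover(uri):
    return {"ex:A": ["ex:A2"], "ex:dead": ["ex:A2"]}.get(uri, [])
```

Even though ex:dead returns no data itself, the crawl recovers the description of its equivalent URI, mirroring how the algorithm continues past a failed starting URI.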
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.1">Default Profile</head><p>Since the end user of such an algorithm may not always know where to look for linked data about the URI they are interested in, the algorithm has a default expansion profile which is used when no profile is specified. This profile uses three data sources which are, in our opinion, important hubs of the Web of Linked Data:</p><p>• DBPedia<ref type="foot" target="#foot_5">7</ref> -The DBPedia SPARQL endpoint is used to look up URIs</p><p>• Sindice<ref type="foot" target="#foot_6">8</ref> Cache -The Sindice Cache API<ref type="foot" target="#foot_7">9</ref> allows the retrieval of Sindice's cached copy of the RDF from a URI.</p><p>• SameAs.org<ref type="foot" target="#foot_8">10</ref> -SameAs.org provides a URI discovery endpoint (see Section 3.1 and Definition 2) which can be used to find URIs equivalent to a given URI</p><p>The default profile<ref type="foot" target="#foot_9">11</ref> has a maximum expansion depth of 1, which means it only considers URIs which are immediate neighbours of the starting URI.</p><p>In the case where the end user does know which linked data sources will have useful information about the URI, they can specify their own expansion profile, which is used instead of the default. In this case the algorithm will use the datasets and linksets defined in the profile to discover linked data about the URI of interest; for example, when attempting to recover data about a person it may be useful to follow foaf:knows links.</p></div>
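An expansion profile of this kind might be represented in memory roughly as follows. This is a sketch only: the key names and all endpoint URLs other than DBPedia's public SPARQL endpoint are placeholders, not the actual VoID/AAT terms or service addresses used by the system.

```python
# Illustrative in-memory form of the default expansion profile described
# above; key names and the non-DBPedia URLs are placeholders.
DEFAULT_PROFILE = {
    "maxExpansionDepth": 1,  # only immediate neighbours of the starting URI
    "datasets": [
        {"name": "DBPedia", "sparqlEndpoint": "http://dbpedia.org/sparql"},
        {"name": "Sindice Cache", "lookupEndpoint": "http://example.org/sindice-cache"},
        {"name": "sameAs.org", "discoveryEndpoint": "http://example.org/sameas"},
    ],
}

def usable_datasets(profile):
    """Datasets the algorithm may consult: anything not marked as ignored."""
    return [d for d in profile.get("datasets", []) if not d.get("ignored")]

def max_depth(profile):
    """Crawl depth for a profile, defaulting to 1 as the paper describes."""
    return profile.get("maxExpansionDepth", 1)
```

The `ignored` flag and the depth default of 1 correspond to the two VoID extensions described in Section 3.1.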
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Preservation</head><p>The preservation approach taken is to allow the end user to monitor and preserve a set of linked data that they are interested in. The data is preserved not at the data source but at a local level on the user's server, with the user able to republish this data as they desire. This is in line with the ideas of Veiga &amp; Ferreira <ref type="bibr" target="#b25">[25]</ref> in that the end user specifies the parts of the Web they want to preserve and the software then takes care of this. The data must be preserved in such a way that the original data can be efficiently extracted from it, and sufficient information to provide versioning over the data must be kept.</p><p>In the Semantic Web domain the objects of interest are URIs, so we propose that a profile of a URI be preserved (see <ref type="bibr">Definition 3)</ref>. Since the data being processed is RDF, it is logically divided into triples which can be preserved and monitored individually. It is deemed necessary to store information pertaining to the temporality and provenance of each triple: when it was first seen, when it was last updated, its source URI(s) and whether it has changed or been retracted/deleted from the RDF. Definition 3. A URI's profile is the transformed and annotated form of the linked data retrievable about a given URI, such that the temporality and provenance of the triples contained therein are inferable from the profile. In terms of the user interface, the system should allow a user to view a profile both in the stored form and in its original form. The system must monitor the original data source over time, updating the profiles as necessary, such that it can provide a report of changes in the data to the user. Since a URI profile will contain versioning information, the interface should allow a user to view a particular version of the profile.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1">Schema</head><p>As the first stage of implementation an RDF Schema for All About That (AAT) is defined, which embodies classes and properties that allow triples to be described and annotated in such a way that the required information discussed in the preceding proposal can be stored for each triple. The schema defines a class for representing profiles called aat:Profile and uses the rdf:Statement class to represent triples. rdf:Statement is used as the basis of triple storage as it makes it possible for non-AAT aware tools to extract the original triples from a profile easily. A number of properties are defined which store metadata about the profile itself, such as created &amp; updated dates, the source URI and a locally unique identifier for the profile. Similar properties are defined for triples, allowing the first and last asserted dates, source URI and change status of a triple to be indicated. A key distinction in the schema is between aat:profileSource and aat:source; despite storing equivalent data, two predicates are created since the former expresses the URI which is the starting point for the profile while the latter expresses all the URIs at which a given triple is asserted.</p><p>While there were alternative schemas and vocabularies available that could potentially have been used to store the required data, the motivation behind designing our own schema was to provide a lightweight schema that attaches all data to a single subject for ease of processing. Alternatives such as the Provenance Vocabulary by Hartig &amp; Zhao <ref type="bibr" target="#b13">[13]</ref> are far more expressive, but they potentially require introducing multiple intermediate blank nodes, which would significantly complicate the processing needed to implement many of the core features of AAT. 
Similarly, the Open Provenance Model described by Moreau et al <ref type="bibr" target="#b18">[18]</ref> is highly expressive, but like Hartig &amp; Zhao's vocabulary its RDF serialization is overly complex for use in AAT. As discussed in Section 5 there is no reason why the data contained in AAT could not be exposed in other provenance vocabularies, but for AAT's processing and storage a lightweight vocabulary is preferable.</p><p>The use of reification was chosen over the use of named graphs primarily due to the need to make annotations at the level of individual triples rather than at the graph level; its usage is further motivated by the fact that the mechanism provides a clear and obvious schema for encoding a triple and adding additional annotations to it. While reification may significantly increase the initial size of the data being stored, over time this balances out compared to named graphs, where it is necessary either to store many copies of the same graph or to store multiple named graphs representing a series of deltas to the original data. The other difficulty inherent in the named graphs approach is that the annotations would then typically be held separately in other named graphs, which adds to the complexity of the data processing. Nevertheless named graphs are used within AAT, since each profile naturally forms a named graph and AAT generates several related named graphs about each profile detailing change history and changesets, as described in Section 3.2.3.</p></div>
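The reification approach can be sketched as a small transformation over a single triple. The aat:* property names below follow the descriptions in the schema section but the exact terms are illustrative; only the rdf:Statement pattern is standard RDF.

```python
from itertools import count

_blank_ids = count(1)

def annotate(triple, source_uri, seen_date):
    """Reify one triple as an rdf:Statement with AAT-style annotations.

    A fresh blank node carries the rdf:subject/predicate/object of the
    original triple (so non-AAT tools can recover it) plus provenance and
    temporality annotations; the aat:* names here are illustrative.
    """
    s, p, o = triple
    node = "_:t%d" % next(_blank_ids)
    return [
        (node, "rdf:type", "rdf:Statement"),
        (node, "rdf:subject", s),
        (node, "rdf:predicate", p),
        (node, "rdf:object", o),
        (node, "aat:source", source_uri),
        (node, "aat:firstAsserted", seen_date),
        (node, "aat:lastAsserted", seen_date),
    ]

reified = annotate(("ex:A", "ex:one", "ex:B"), "http://example.org/A", "2010-01-01")
```

Because the original subject, predicate and object sit in standard rdf:Statement positions, extracting the source triple back out requires no AAT-specific knowledge.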
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.2">Profile Creation &amp; Update</head><p>To create a URI's profile, linked data about the URI is first retrieved using the expansion algorithm presented in Section 3.1; then, using the AAT schema, each triple can be transformed into a set of triples which represent an annotation of the original triple. For each triple in the original RDF a blank node is created, which is then used as the subject of a set of triples representing the required information about the original triple. Figure <ref type="figure" target="#fig_0">1</ref> shows an example triple and Figure <ref type="figure" target="#fig_1">2</ref> shows it transformed into the AAT form. A URI's profile consists of a set of transformed triples, where each profile is a named graph in the underlying store. Since the user needs both to browse the data they are preserving and potentially to republish it, a Web based interface was designed as the primary interaction mechanism. The interface allows users to explore the data by first selecting a profile to view and then allowing them to view the profile contents, export, versions and change reports. A user may also use the interface to add new URIs they wish to monitor to the system and to initiate updates to profiles (see Definition 4). Following linked data best practices <ref type="bibr" target="#b6">[6]</ref>, and to provide the ability for the user to republish their preserved data, multiple dereferenceable URIs for each profile are created and made accessible through the Web interface. These allow the retrieval of the profile contents, which consists of all the triples ever retrieved from the profile URI in the transformed form, the export of the profile (see Definition 5) and various meta graphs about a profile, e.g. change history and changesets. This means that the profile of a URI has a URI and thus can itself be profiled if desired. Definition 4. 
An update of a profile occurs when AAT uses the expansion algorithm to retrieve RDF about the given URI; the triples contained therein are compared with the triples currently in the profile and the profile is updated accordingly. Definition 5. The export of a profile is the recreation of the RDF in its original form based upon the current contents of the profile. An export represents the RDF as it was last seen by AAT.</p></div>
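Definition 4's update step can be sketched as a comparison between freshly retrieved triples and the stored profile. Here the profile is modelled as a plain dict from triple to annotation dates, a deliberate simplification of the reified AAT form; absent triples are deliberately left untouched so that the change reporter can later classify them from their dates.

```python
def update_profile(profile, retrieved, today):
    """Refresh last-asserted dates for re-seen triples and add new ones."""
    for t in retrieved:
        if t in profile:
            profile[t]["lastAsserted"] = today
        else:
            profile[t] = {"firstAsserted": today, "lastAsserted": today}
    return profile

profile = {
    ("ex:A", "ex:one", "ex:B"): {"firstAsserted": "2010-01-01", "lastAsserted": "2010-01-01"},
    ("ex:A", "ex:many", "ex:C"): {"firstAsserted": "2010-01-01", "lastAsserted": "2010-01-01"},
}
update_profile(profile, [("ex:A", "ex:one", "ex:B"), ("ex:A", "ex:many", "ex:E")], "2010-02-01")
```

After the update, the stale last-asserted date on the absent triple is what later allows it to be reported as missing and, eventually, deleted.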
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.3">Change Reporting</head><p>A key feature of AAT is the ability to generate change reports describing how the RDF at the profiled URI has changed over time. To do this a number of relatively simple computations over the annotated triples can be made, based primarily on the first and last asserted dates of the triples. In creating change reports four different types of change in the RDF are looked for (see Definitions 6-9). A distinction is made between missing knowledge and retracted or deleted knowledge, since triples may be only temporarily absent from the RDF. For example, in the event of a transient network issue making some or all of the relevant URIs unretrievable, the updated date for the profile will still advance, leaving all the triples in the profile appearing to be missing. The length of time we require triples to be missing before we consider them deleted is currently set to 7 days for our monitoring of the BBC dataset described in Section 4.2.1; this time period is a domain specific parameter that can be adjusted depending on the data being monitored. Definition 6. New knowledge is any triple that is new to the RDF at the profiled URI. Definition 7. Changed knowledge is any triple where the object of the triple has changed; only triples whose predicate has a cardinality of 1 can be considered to change. Definition 8. Missing knowledge is any triple no longer found in the RDF at the profiled URI but which was recently seen in the RDF. Definition 9. Retracted or deleted knowledge is any triple no longer found in the RDF at the profiled URI which has not been seen for a reasonable length of time. With regard to the concept of changed knowledge, consider some arbitrary predicates ex:one and ex:many which have cardinalities of 1 and unrestricted respectively. Since ex:one has a cardinality of 1, it can be said that whenever the object of such a triple changes this is changed knowledge. 
The same cannot be said for ex:many triples, since that predicate has unrestricted cardinality; each triple using such a predicate must be treated as a unique entity, i.e. one instance of a triple using the predicate cannot be considered to replace another. In the examples, the fact that &lt;A&gt; was related to &lt;C&gt; via the predicate ex:many in Example 1 and is instead related to &lt;E&gt; in Example 2 does not mean that &lt;A&gt; is related to &lt;E&gt; instead of &lt;C&gt;; it simply means &lt;A&gt; is no longer related to &lt;C&gt;. The relation to &lt;E&gt; is new knowledge, while the lapsed relation to &lt;C&gt; is missing/deleted knowledge; had the value of the ex:one relationship changed instead, that would be considered changed knowledge.</p><p>Example 1 Original Graph &lt;A&gt; ex:one &lt;B&gt; . &lt;A&gt; ex:many &lt;C&gt; . &lt;A&gt; ex:many &lt;D&gt; .</p><p>Example 2 Modified Graph &lt;A&gt; ex:one &lt;B&gt; . &lt;A&gt; ex:many &lt;D&gt; . &lt;A&gt; ex:many &lt;E&gt; .</p><p>When a change report is computed it is itself serialized into an RDF graph using the Talis Changeset ontology <ref type="bibr" target="#b1">[1]</ref>, which is stored as a named graph in the underlying store and republished via the web interface. Each Changeset generated links back to the previous Changeset (if one exists) so that an end user or client application consuming the data can follow the history of changes; a special URI which retrieves the most recent Changeset is provided so that users have a starting point for this. Separately from the Changesets, a named graph containing a history for each profile is also stored, which links to all the relevant Changesets for that profile.</p></div>
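The classification of Definitions 6-9, including the cardinality rule and the missing-versus-deleted threshold, can be sketched as follows. This is an illustrative reconstruction rather than AAT's actual code; `classify_changes` and its parameters are names introduced here, while AAT derives the same categories from the first/last asserted dates in its annotated store:

```python
from datetime import timedelta

def classify_changes(old, new, last_seen, now, cardinality_one,
                     deletion_threshold=timedelta(days=7)):
    """Classify triple-level differences per Definitions 6-9.

    `old`/`new` are sets of (s, p, o) tuples from successive updates,
    `last_seen` maps each old triple to when it was last asserted, and
    `cardinality_one` is the set of predicates with cardinality 1.
    """
    added, gone = new - old, old - new
    changed = set()
    for s, p, o in set(added):
        if p in cardinality_one:
            # a cardinality-1 predicate whose object differs is changed knowledge
            prior = next(((s2, p2, o2) for s2, p2, o2 in gone
                          if s2 == s and p2 == p), None)
            if prior is not None:
                changed.add((prior, (s, p, o)))
                added.discard((s, p, o))
                gone.discard(prior)
    # recently unseen triples are only "missing"; older ones count as deleted
    missing = {t for t in gone if now - last_seen[t] <= deletion_threshold}
    return {"new": added, "changed": changed,
            "missing": missing, "deleted": gone - missing}
```

Applied to Examples 1 and 2 this yields &lt;A&gt; ex:many &lt;E&gt; as new knowledge and &lt;A&gt; ex:many &lt;C&gt; as missing (becoming deleted once the threshold elapses), while a differing object for ex:one would surface as changed knowledge.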
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">RESULTS</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Expansion</head><p>To test the expansion algorithm we took a small sample of URIs which included the URIs of the authors, places associated with the authors and TV programmes from the BBC (since we use the BBC programmes dataset for our preservation tests as described in Section 4.2). The results shown in Table <ref type="table" target="#tab_1">1</ref> show that the amount of linked data that can be obtained using the default expansion profile described in Section 3.1.1 varies depending on the URI being profiled. Expanding the URI of a person potentially produces a large number of small graphs, particularly if that person is a well published academic, since many bibliographic databases are exposed as linked data and provide small amounts of data about people. URIs for places return varying amounts of data depending on the size and relative importance of the place. Conversely, expanding the URIs of BBC programmes using the default profile produces very little linked data; we suspect that this is due to the type of data and the fact that the linking it uses is mostly based on the BBC's own ontologies. As outlined in Section 5 we plan to conduct experiments in the future to assess the efficacy of the algorithm on various types of data and using domain specific expansion profiles.</p><p>One of the benefits of the algorithm, as can be seen in the results in Table <ref type="table" target="#tab_1">1</ref>, is that it is trivially parallelisable. Increasing the number of threads used to process the discovered URIs shows a significant reduction in the time taken to retrieve the linked data. Experiments were conducted with higher numbers of threads but 8 threads was found to be optimal, since beyond 8 threads erratic behaviour is observed due to two factors: 1. underlying limitations of the HTTP API used in terms of stable concurrent connections and 2. high volumes of concurrent access to a single site look like DoS attacks and lead to temporary bans on accessing those sites. Differences in the number of triples and graphs returned for URIs can be attributed to a couple of factors. In the case of the London URI, where the difference is dramatic -over 200,000 triples -this is because with a smaller number of threads connections seem more likely to time out, though we are unsure why. In the other cases many of the graphs were from the same domain name and the API used to retrieve the RDF had a bug in its connection management for multiple concurrent connections to the same domain, which caused connections to fail unexpectedly; this is why a reduction in the amount of data is observed as the number of threads increased.</p></div>
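The parallel retrieval step can be sketched as below. `retrieve_parallel` and the injectable `fetch` callable are hypothetical stand-ins for the HTTP machinery AAT actually uses; making `fetch` a parameter keeps the sketch network-free while showing why the retrievals parallelise trivially (they are independent of one another):

```python
from concurrent.futures import ThreadPoolExecutor

def retrieve_parallel(uris, fetch, max_workers=8):
    """Fetch RDF for each discovered URI concurrently.

    `fetch` is any callable mapping a URI to a set of triples (in AAT
    this issues HTTP requests and queries the configured endpoints).
    The 8-worker default mirrors the optimum observed in Table 1;
    beyond that, connection limits and DoS protections dominate.
    """
    uris = list(uris)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so zip pairs each URI with its result
        return dict(zip(uris, pool.map(fetch, uris)))
```

In practice the fetcher would also need per-host rate limiting to avoid the temporary bans described above.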
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Preservation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.1">BBC Programmes</head><p>In order to test AAT properly it was used to monitor a subset of the BBC Programmes<ref type="foot" target="#foot_11">13</ref> dataset, which is a large and constantly changing linked data set; this allowed both for testing the scalability of AAT and for verifying that its change detection algorithms worked as expected. As can be seen in Table <ref type="table" target="#tab_2">2</ref>, the BBC update their dataset on a daily basis. The initial high number of changes is due to starting from a base dataset that was a couple of months old, a consequence of architectural changes made to AAT to support the use of the expansion algorithm and improve the efficiency of the system. The average of 2 changes per profile reflects the typical update we see the BBC make to their data: they add a triple describing a newly broadcast episode of a programme and update the value of the dc:modified triple. The apparently high maximum of 25 changes is due to one of the programme URIs failing to resolve, resulting in the contents of that profile being considered to be missing, so the change report for each day reports those triples as removed. The relatively high number of profiles changing each day is due to the fact that, as already stated, many of the programmes associated with BBC 1 are broadcast daily, such as soaps and news bulletins, and that the BBC publish data about programmes several days before the programmes are actually broadcast.</p><p>To demonstrate the reuse of the data being harvested we created a demonstration application, a simple web based faceted browser which lets users browse through information about recently shown BBC shows. Facets can be used to filter by Genre and Channel and the user can view detailed information about both programmes and the individual episodes. 
This application was presented as part of an earlier prototype of AAT described in <ref type="bibr" target="#b26">[26]</ref> and shown in Figure <ref type="figure" target="#fig_2">3</ref>. Like previous work by Papavassiliou et al <ref type="bibr" target="#b20">[20]</ref>, it shows that simple information about basic triple level changes in RDF (additions, deletions etc.) can be reprocessed into useful applications for end users. AAT's architecture is constructed as shown in Figure <ref type="figure" target="#fig_3">4</ref>; as can be seen it is decomposed into several components which rely on some external standalone components: an RDF API and the expansion algorithm. AAT is theoretically agnostic of its underlying storage, though in practice differences in implementation between triple stores mean only certain stores are currently viable as the backing store. In the early stages an RDBMS based store was used, which was sufficient for initial prototyping but did not scale for real world testing, so production grade triple stores were then adopted. Initially it was intended to use the open source release of Virtuoso<ref type="foot" target="#foot_12">14</ref> as the backing store, but it was found that Virtuoso did not correctly preserve boolean typed literals, which created issues in the internal processing of data within AAT. 4store<ref type="foot" target="#foot_13">15</ref> was then used briefly, but it was unable to handle the heavy volume of parallel reads/writes which AAT performs during its data processing, due to 4store's concurrency model. Currently AAT runs against AllegroGraph<ref type="foot" target="#foot_14">16</ref> since it has demonstrated in testing the ability to handle the high volumes of reads/writes necessary for using AAT on the large dataset described in the preceding section.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.2">Architecture &amp; Scalability</head><p>In terms of general scalability, the majority of algorithms in AAT need to run on a single thread for each profile, but it is trivial to process multiple profiles in parallel and this is the approach currently taken. Since the work can be divided over multiple threads it would also be possible to increase scalability significantly by dividing the work over a cluster of machines, which would allow much larger datasets to be monitored efficiently.</p></div>
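One way such a division of work might look is stable hash partitioning of profiles across workers, so that each profile's whole pipeline stays on a single worker (matching the one-thread-per-profile constraint) while distinct profiles proceed in parallel. This is a sketch of the idea, not AAT's implementation; `process_profile` is a placeholder for the real per-profile pipeline:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def partition_profiles(profile_uris, num_workers):
    """Assign each profile URI to a worker bucket by a stable hash.

    crc32 keeps the assignment stable across runs and processes, so the
    same scheme could shard profiles over a cluster of machines rather
    than threads in one process.
    """
    buckets = [[] for _ in range(num_workers)]
    for uri in profile_uris:
        buckets[zlib.crc32(uri.encode()) % num_workers].append(uri)
    return buckets

def monitor_all(profile_uris, process_profile, num_workers=4):
    """Run the single-threaded per-profile pipeline over each bucket in
    parallel; within a bucket, profiles are processed sequentially."""
    def run(bucket):
        return [process_profile(uri) for uri in bucket]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        buckets = partition_profiles(profile_uris, num_workers)
        return [r for results in pool.map(run, buckets) for r in results]
```

Because no two workers touch the same profile, no intra-profile locking is needed; only the shared backing store must tolerate concurrent reads/writes, which is exactly the property that drove the triple store choices described above.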
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">FUTURE WORK</head><p>There are a number of things that could be done to improve the expansion algorithm outlined in Section 3.1, with regards both to making it more intelligent in how it retrieves linked data and to conducting a detailed analysis of the data returned. Manual inspection of the data shows that it does appear to be relevant to the URI of interest, but we propose to conduct a full IR analysis in order to confirm this initial assessment statistically. Additionally, as was seen in Table <ref type="table" target="#tab_1">1</ref>, some types of URIs produced very little linked data using the default expansion profile; a broader analysis using domain specific profiles is necessary to ascertain whether those URIs have low levels of interlinking or whether the interlinking just uses domain specific links rather than the generic owl:sameAs and rdfs:seeAlso links that are followed by default.</p><p>In terms of improving the intelligence of the algorithm: at the moment it submits every URI to every SPARQL, lookup and discovery endpoint described in the expansion profile, and it would improve the speed of the algorithm if it could apply some decision making as to which endpoints a given URI should be submitted to. Conversely, there is the possibility that this would impact the effectiveness of the algorithm, so it would be necessary to conduct experiments to determine whether there is a trade-off between speed and accuracy. It is also worth considering that searching on URIs is not the only viable mechanism for finding additional linked data about a URI of interest. Using terms extracted from the RDF, such as the objects of rdfs:label or dc:title triples, would provide a way to augment URI based lookup with term/text based search results from semantic search engines. 
There are already frameworks like Silk <ref type="bibr" target="#b28">[28]</ref> which can be used to do this and it would be useful to integrate the Silk framework with the expansion algorithm.</p><p>One limitation inherent in AAT is that it currently does no special handling of blank nodes, which means that if data contains blank nodes AAT will continuously think it has encountered new knowledge when most likely it has not. For the data we have worked with so far this is generally not an issue, since the linked data community tends to avoid blank nodes, but if we are to provide for preserving all kinds of RDF effectively then we need to handle blank nodes properly. Solving this problem may involve sub-graph matching and isomorphism testing to see if the sections of the graph that contain blank nodes can be mapped to previously seen sections of the graph, as in Tummarello et al's RDFSync <ref type="bibr" target="#b23">[23]</ref>. The blank nodes themselves could either be left as-is or translated to URIs as done by systems like the Talis<ref type="foot">17</ref> platform.</p><p>Given that this work was inspired by traditional link integrity techniques from hypermedia, it is interesting to note that it has the potential to be applied back to the document web, since there is increasing cross-over between the document and data web, primarily due to the increasing uptake of RDFa. 
As increasing numbers of documents embed structured data using RDFa it will become possible to preserve and monitor the structured information embedded in ordinary web pages in the same way as can be done with linked data now; we therefore envisage this work having applications in the automated monitoring and maintenance of document based websites.</p><p>As mentioned in Section 3.2.1 a lightweight schema is used by AAT to annotate and store the data, but there are alternative vocabularies that could have been used, such as the Provenance Vocabulary <ref type="bibr" target="#b13">[13]</ref> and the Open Provenance Model <ref type="bibr" target="#b18">[18]</ref>. It would be a fairly easy and potentially useful enhancement to map the AAT schema to these vocabularies so that the data could be retrieved in the desired form by users/client applications designed to work with those formats.</p></div>
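For the blank node limitation discussed in this section, one lightweight alternative to full sub-graph isomorphism testing is to relabel blank nodes canonically before diffing. The sketch below is our illustration, not part of AAT: it hashes each blank node's ground context, which is a deliberate simplification of the RDFSync approach and is only reliable when blank nodes are distinguished by their ground triples and do not chain to one another:

```python
import hashlib

def canonicalise_bnodes(triples):
    """Give blank nodes content-derived labels so successive fetches of
    the same data compare equal despite fresh blank node identifiers.

    Each blank node is renamed to a hash of the triples it occurs in,
    with the node itself masked out of its own signature.
    """
    def is_bnode(term):
        return isinstance(term, str) and term.startswith("_:")

    signatures = {}
    for s, p, o in triples:
        for node in (s, o):
            if is_bnode(node):
                masked = tuple("?" if t == node else t for t in (s, p, o))
                signatures.setdefault(node, []).append(masked)
    canon = {b: "_:" + hashlib.sha1(repr(sorted(sig)).encode()).hexdigest()[:10]
             for b, sig in signatures.items()}
    return {tuple(canon.get(t, t) for t in triple) for triple in triples}
```

With this applied before computing a change report, a re-fetched graph whose only difference is its blank node labels produces an empty report rather than spurious new and missing knowledge.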
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">CONCLUSION</head><p>In this work we have introduced a simple but powerful expansion algorithm which can be used to retrieve linked data about a URI even when that URI is not resolvable. This provides an important tool for preserving data on the Semantic Web and recovering from data loss, and shows that on the Semantic Web links themselves can be exploited as a means to recover from broken links. As we have outlined in Sections 4.1 and 5, there is a need to conduct a detailed analysis of the algorithm to assess its efficacy for a wider variety of URIs and using domain specific expansion profiles. Depending on the results of this analysis the algorithm may need to be further refined to improve both its speed and accuracy.</p><p>We have also presented the All About That (AAT) system, which allows users to monitor and preserve linked data they are interested in, using the expansion algorithm as the primary retrieval method for deciding which linked data to preserve based on a starting URI. As we demonstrated in Section 4.2.1, we envisage the usage of such a system as a base on which to build rich Semantic Web applications that can take the changing data and present it in interesting and useful ways to end users. It also fulfils a role in the overall goal of our research, which is to provide a suite of algorithms and systems that can be used to manage both data and link integrity on the Semantic Web.</p><p>As discussed in Section 5, there are some limitations in the current versions of our algorithm and the AAT system which we intend to investigate and address in the future. It is clear that there is still a significant amount of work to be done to create a comprehensive set of tools that can be applied to as wide a variety of data on the Semantic Web as possible, and the experience of past research in link integrity for the document Web tells us that there will be no perfect solution.</p><p>Despite this, it is our belief that as the Semantic Web grows, data and link integrity will be increasingly important issues for users as their applications come to rely upon linked data. There is a need to have systems in place such that data can be preserved and accessed even if the original sources are gone or unavailable. This has already been seen with the release of services like the Sindice Cache API<ref type="foot">18</ref>, which is used as one of the data sources in the default expansion profile (see Section 3.1.1). Additionally, with the rising adoption of RDFa embedded inside documents on the web, systems like this become applicable to the preservation of the structured data embedded in the document based Web, as discussed in Section 5.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Original Triple</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Triple transformed to AAT Annotated Form</figDesc><graphic coords="5,321.85,115.81,225.94,127.66" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: BBC Programmes Demonstration Application built on top of data from AAT</figDesc><graphic coords="8,316.81,53.80,225.95,260.52" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: All About That Architecture</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 :</head><label>1</label><figDesc>Sample Expansion Algorithm ResultsURITotal Graphs Total Triples Retrieval Time (seconds) Thread Used</figDesc><table><row><cell>Rob Vesse</cell><cell>4</cell><cell>115</cell><cell>13.8</cell><cell>1</cell></row><row><cell>http://id.ecs.soton.ac.uk/person/11471</cell><cell>4</cell><cell>115</cell><cell>1.8</cell><cell>2</cell></row><row><cell></cell><cell>4</cell><cell>115</cell><cell>1.8</cell><cell>4</cell></row><row><cell></cell><cell>4</cell><cell>115</cell><cell>2.1</cell><cell>8</cell></row><row><cell>Wendy Hall</cell><cell>691</cell><cell>4,068</cell><cell>786.3</cell><cell>1</cell></row><row><cell>http://id.ecs.soton.ac.uk/person/1650</cell><cell>692</cell><cell>4,070</cell><cell>383.8</cell><cell>2</cell></row><row><cell></cell><cell>692</cell><cell>4,070</cell><cell>375.9</cell><cell>4</cell></row><row><cell></cell><cell>692</cell><cell>4,070</cell><cell>359.6</cell><cell>8</cell></row><row><cell>Les 
Carr</cell><cell>368</cell><cell>2,694</cell><cell>438.9</cell><cell>1</cell></row><row><cell>http://id.ecs.soton.ac.uk/person/60</cell><cell>279</cell><cell>2,516</cell><cell>109.5</cell><cell>2</cell></row><row><cell></cell><cell>238</cell><cell>2,434</cell><cell>75.9</cell><cell>4</cell></row><row><cell></cell><cell>204</cell><cell>2,366</cell><cell>64.8</cell><cell>8</cell></row><row><cell>Ilkeston</cell><cell>6</cell><cell>444</cell><cell>19.1</cell><cell>1</cell></row><row><cell>http://dbpedia.org/resource/Ilkeston</cell><cell>5</cell><cell>393</cell><cell>13.5</cell><cell>2</cell></row><row><cell></cell><cell>5</cell><cell>416</cell><cell>9.6</cell><cell>4</cell></row><row><cell></cell><cell>5</cell><cell>393</cell><cell>5.3</cell><cell>8</cell></row><row><cell>Southampton</cell><cell>24</cell><cell>3,735</cell><cell>57.2</cell><cell>1</cell></row><row><cell>http://dbpedia.org/resource/Southampton</cell><cell>23</cell><cell>3,497</cell><cell>43.8</cell><cell>2</cell></row><row><cell></cell><cell>23</cell><cell>3,497</cell><cell>27.3</cell><cell>4</cell></row><row><cell></cell><cell>23</cell><cell>3,497</cell><cell>55.3</cell><cell>8</cell></row><row><cell>Nottingham</cell><cell>17</cell><cell>4,154</cell><cell>41.4</cell><cell>1</cell></row><row><cell>http://dbpedia.org/resource/Nottingham</cell><cell>16</cell><cell>4,048</cell><cell>39.5</cell><cell>2</cell></row><row><cell></cell><cell>16</cell><cell>4,048</cell><cell>27.4</cell><cell>4</cell></row><row><cell></cell><cell>16</cell><cell>4,048</cell><cell>25.9</cell><cell>8</cell></row><row><cell>London</cell><cell>13</cell><cell>53,886</cell><cell>142.4</cell><cell>1</cell></row><row><cell>http://dbpedia.org/resource/</cell><cell>13</cell><cell>53,870</cell><cell>211.9</cell><cell>2</cell></row><row><cell></cell><cell>13</cell><cell>53,870</cell><cell>149.8</cell><cell>4</cell></row><row><cell></cell><cell>14</cell><cell>280,424</cell><cell>385.8</cell><cell>8</cell></row><row><cell>Eastenders</cell><cell>2
</cell><cell>612</cell><cell>1.8</cell><cell>1</cell></row><row><cell>http://www.bbc.co.uk/programmes/b006m86d</cell><cell>2</cell><cell>612</cell><cell>0.7</cell><cell>2</cell></row><row><cell></cell><cell>2</cell><cell>612</cell><cell>0.6</cell><cell>4</cell></row><row><cell></cell><cell>2</cell><cell>612</cell><cell>0.7</cell><cell>8</cell></row><row><cell>Panorama</cell><cell>2</cell><cell>174</cell><cell>1.4</cell><cell>1</cell></row><row><cell>http://www.bbc.co.uk/programmes/b006t14n</cell><cell>2</cell><cell>174</cell><cell>0.9</cell><cell>2</cell></row><row><cell></cell><cell>2</cell><cell>174</cell><cell>0.7</cell><cell>4</cell></row><row><cell></cell><cell>2</cell><cell>174</cell><cell>0.6</cell><cell>8</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 :</head><label>2</label><figDesc>BBC Programmes preserved dataset size over 1 week</figDesc><table><row><cell>Date and Time</cell><cell>Number of Changed Profiles</cell><cell>Average Changes per Profile</cell><cell>Max. Changes</cell><cell>Min. Changes</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2</head><label>2</label><figDesc>demonstrates the average number of changes detected over just a short period.</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_0">http://www.talis.com/platform</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_1">http://www.resist-noe.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_2">http://www.rkbexplorer.com</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_3">http://www.sameas.org</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_4">http://www.uriburner</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_5">http://dbpedia.org</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_6">http://www.sindice.com</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_7">http://www.sindice.com/developers/cacheapi</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_8">http://www.sameas.org</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="11" xml:id="foot_9">http://www.dotnetrdf.org/expander/defaultProfile</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="12" xml:id="foot_10">This schema is available at http://www.dotnetrdf.org/ AllAboutThat/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="13" xml:id="foot_11">http://www.bbc.co.uk/programmes/developers</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="14" xml:id="foot_12">http://www.openlinksw.com/virtuoso</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="15" xml:id="foot_13">http://4store.org</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="16" xml:id="foot_14">http://www.franz.com/agraph/allegrograph/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>


<biblStruct xml:id="b1">
	<monogr>
		<ptr target="http://n2.talis.com/wiki/Changeset_Protocol" />
		<title level="m">Changeset protocol</title>
				<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<ptr target="http://virtuoso.openlinksw.com/Whitepapers/html/VirtSpongerWhitePaper.html" />
		<title level="m">Virtuoso sponger</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
		<respStmt>
			<orgName>OpenLink Software</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Describing linked datasets: On the design and usage of void, the &apos;vocabulary of interlinked datasets&apos;</title>
		<author>
			<persName><forename type="first">K</forename><surname>Alexander</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cyganiak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hausenblas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<ptr target="http://ceur-ws.org/Vol-538/ldow2009_paper20.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Linked Data on the Web Workshop (LDOW2009)</title>
				<meeting>the Linked Data on the Web Workshop (LDOW2009)<address><addrLine>Madrid, Spain</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009-04">April 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Electronic document addressing: dealing with change</title>
		<author>
			<persName><forename type="first">H</forename><surname>Ashman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="201" to="212" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">The world-wide web</title>
		<author>
			<persName><forename type="first">T</forename><surname>Berners-Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cailliau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Luotonen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">F</forename><surname>Nielsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Secret</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="76" to="82" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Bizer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cyganiak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Heath</surname></persName>
		</author>
		<ptr target="http://sites.wiwiss.fu-berlin.de/suhl/bizer/pub/LinkedDataTutorial" />
		<title level="m">How to publish linked data on the web</title>
				<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">An entity name system (ens) for the semantic web</title>
		<author>
			<persName><forename type="first">P</forename><surname>Bouquet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Stoermer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bazzanella</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">5th European Semantic Web Conference</title>
				<meeting><address><addrLine>ESWC</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008. 2008</date>
			<biblScope unit="volume">5021</biblScope>
			<biblScope unit="page">258</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Falcons: searching and browsing entities on the semantic web</title>
		<author>
			<persName><forename type="first">G</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Ge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Qu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">WWW &apos;08: Proceeding of the 17th international conference on World Wide Web</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="1101" to="1102" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Data Integrity Problems in an Open Hypermedia Link Service</title>
		<author>
			<persName><forename type="first">H</forename><surname>Davis</surname></persName>
		</author>
		<ptr target="http://eprints.ecs.soton.ac.uk/6597/" />
		<imprint>
			<date type="published" when="1995-11">November 1995</date>
		</imprint>
		<respStmt>
			<orgName>University of Southampton</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Hypertext link integrity</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">C</forename><surname>Davis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="page">28</biblScope>
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Microcosm: an open model for hypermedia with dynamic linking</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Fountain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Hall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Heath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">C</forename><surname>Davis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Hypertext: concepts, systems and applications</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="1992">1992</date>
			<biblScope unit="page" from="298" to="311" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Just-in-time recovery of missing web pages</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">L</forename><surname>Harrison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Nelson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">HYPERTEXT &apos;06: Proceedings of the seventeenth conference on Hypertext and hypermedia</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="145" to="156" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Guide to the provenance vocabulary</title>
		<author>
			<persName><forename type="first">O</forename><surname>Hartig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<ptr target="http://sourceforge.net/apps/mediawiki/trdf/index.php?title=Provenance_Vocabulary" />
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">DSNotify-Detecting and Fixing Broken Links in Linked Data Sets</title>
		<author>
			<persName><forename type="first">B</forename><surname>Haslhofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Popitsch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 8th International Workshop on Web Semantics</title>
				<meeting>8th International Workshop on Web Semantics</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Managing URI synonymity to enable consistent reference on the semantic web</title>
		<author>
			<persName><forename type="first">A</forename><surname>Jaffri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Glaser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Millard</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IRSW2008 - Identity and Reference on the Semantic Web</title>
				<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A scalable architecture for maintaining referential integrity in distributed information systems</title>
		<author>
			<persName><forename type="first">F</forename><surname>Kappe</surname></persName>
		</author>
		<ptr target="http://www.jucs.org/jucs_1_2/a_scalable_architecture_for" />
	</analytic>
	<monogr>
		<title level="j">Journal of Universal Computer Science</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="84" to="104" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Hyper-G: A new tool for distributed hypermedia</title>
		<author>
			<persName><forename type="first">F</forename><surname>Kappe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Andrews</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Faschingbauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gaisbauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pichler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schipflinger</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1994">1994</date>
		</imprint>
		<respStmt>
			<orgName>Institutes for Information Processing Graz</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">The open provenance model</title>
		<author>
			<persName><forename type="first">L</forename><surname>Moreau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Freire</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Futrelle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mcgrath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Myers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Paulson</surname></persName>
		</author>
		<ptr target="http://eprints.ecs.soton.ac.uk/14979/" />
		<imprint>
			<date type="published" when="2007-12">December 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">A Community of Agents Maintaining Links in the World Wide Web (Preliminary Report)</title>
		<author>
			<persName><forename type="first">L</forename><surname>Moreau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Gray</surname></persName>
		</author>
		<ptr target="http://www.ecs.soton.ac.uk/~lavm/papers/gcWWW.ps.gz" />
	</analytic>
	<monogr>
		<title level="m">The Third International Conference and Exhibition on The Practical Application of Intelligent Agents and Multi-Agents</title>
				<meeting><address><addrLine>London, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1998-03">Mar. 1998</date>
			<biblScope unit="page" from="221" to="235" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">On Detecting High-Level Changes in RDF/S KBs</title>
		<author>
			<persName><forename type="first">V</forename><surname>Papavassiliou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Flouris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Fundulaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kotzinos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Christophides</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web: 8th International Semantic Web Conference (ISWC 2009)</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="473" to="488" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Robust hyperlinks: Cheap, everywhere, now</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">A</forename><surname>Phelps</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wilensky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Digital Documents: Systems and Principles</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="514" to="549" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Sindice.com: Weaving the Open Linked Data</title>
		<author>
			<persName><forename type="first">G</forename><surname>Tummarello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Delbru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Oren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web: 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="552" to="565" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">RDFSync: efficient remote synchronization of RDF models</title>
		<author>
			<persName><forename type="first">G</forename><surname>Tummarello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Morbidoni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bachmann-Gmür</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Erling</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web: 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007</title>
				<meeting><address><addrLine>Busan, Korea</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2007">November 11-15, 2007</date>
			<biblScope unit="page" from="537" to="551" />
		</imprint>
	</monogr>
	<note>Proceedings</note>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Repweb: replicated web with referential integrity</title>
		<author>
			<persName><forename type="first">L</forename><surname>Veiga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ferreira</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SAC &apos;03: Proceedings of the 2003 ACM symposium on Applied computing</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="1206" to="1211" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Turning the web into an effective knowledge repository</title>
		<author>
			<persName><forename type="first">L</forename><surname>Veiga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ferreira</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICEIS 2004: Software Agents and Internet Computing</title>
				<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="volume">14</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">All about that - a URI profiling tool for monitoring and preserving linked data</title>
		<author>
			<persName><forename type="first">R</forename><surname>Vesse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Hall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Carr</surname></persName>
		</author>
		<ptr target="http://eprints.ecs.soton.ac.uk/17815" />
	</analytic>
	<monogr>
		<title level="m">ISWC 2009</title>
				<imprint>
			<date type="published" when="2009-08">August 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Discovering and maintaining links on the web of data</title>
		<author>
			<persName><forename type="first">J</forename><surname>Volz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bizer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gaedke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Kobilarov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web - ISWC 2009</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">A</forename><surname>Bernstein</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><forename type="middle">R</forename><surname>Karger</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Heath</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Feigenbaum</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Maynard</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Motta</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Thirunarayan</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="volume">5823</biblScope>
			<biblScope unit="page" from="650" to="665" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Silk - a link discovery framework for the web of data</title>
		<author>
			<persName><forename type="first">J</forename><surname>Volz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bizer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gaedke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Kobilarov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2nd Linked Data on the Web Workshop (LDOW2009)</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
