<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">To SCRY Linked Data: Extending SPARQL the Easy Way</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Bas</forename><surname>Stringer</surname></persName>
							<email>b.stringer@vu.nl</email>
							<affiliation key="aff0">
								<orgName type="department">Centre for Integrative Bioinformatics</orgName>
								<orgName type="institution">VU University Amsterdam</orgName>
								<address>
									<region>NL</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Albert</forename><surname>Meroño-Peñuela</surname></persName>
							<affiliation key="aff1">
								<orgName type="laboratory">Knowledge Representation and Reasoning Group</orgName>
								<orgName type="institution">VU University Amsterdam</orgName>
								<address>
									<region>NL</region>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="department">Data Archiving and Networked Services</orgName>
								<orgName type="institution">KNAW</orgName>
								<address>
									<region>NL</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Antonis</forename><surname>Loizou</surname></persName>
							<affiliation key="aff1">
								<orgName type="laboratory">Knowledge Representation and Reasoning Group</orgName>
								<orgName type="institution">VU University Amsterdam</orgName>
								<address>
									<region>NL</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sanne</forename><surname>Abeln</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Centre for Integrative Bioinformatics</orgName>
								<orgName type="institution">VU University Amsterdam</orgName>
								<address>
									<region>NL</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jaap</forename><surname>Heringa</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Centre for Integrative Bioinformatics</orgName>
								<orgName type="institution">VU University Amsterdam</orgName>
								<address>
									<region>NL</region>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">To SCRY Linked Data: Extending SPARQL the Easy Way</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">E8FF384DADD82B7C3F380A552B6F8C84</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T21:05+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Scientific communities are increasingly publishing datasets on the Web following the Linked Data principles, storing RDF graphs in triplestores and making them available for querying through SPARQL. However, solving domain-specific problems often relies on information that cannot be included in such triplestores. For example, it is virtually impossible to foresee, and precompute, all statistical tests users will want to run on these datasets, especially if data from external triplestores is involved. A straightforward solution is to query the triplestore with SPARQL and compute the required information post-hoc. However, post-hoc scripting is laborious and typically not reusable, and the computed information is not accessible within the original query. Other solutions allow this computation to happen at query time, as with SPARQL Extensible Value Testing (EVT) and Linked Data APIs. However, such approaches can be difficult to apply, due to limited interoperability and poor extensibility. In this paper we present SCRY, the SPARQL compatible service layer, which is a lightweight SPARQL endpoint that interprets parts of basic graph patterns as calls to user-defined services. SCRY allows users to incorporate algorithms of arbitrary complexity within standards-compliant SPARQL queries, and to use the generated outputs directly within these same queries. Unlike traditional SPARQL endpoints, the RDF graph against which SCRY resolves its queries is generated at query time, by executing services encoded in the basic graph patterns. SCRY's federation-oriented design allows for easy integration with existing SPARQL endpoints, effectively extending their functionality in a decoupled, tool-independent way and allowing the power of Semantic Web technology to be more easily applied to domain-specific problems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The Semantic Web continues to grow, reaching an increasing number of scientific communities <ref type="bibr" target="#b10">[11]</ref>. This is driven in part by the adoption of Linked Data principles and convergent practices by these communities <ref type="bibr" target="#b7">[8]</ref>, which publish a great variety of linked scientific datasets in the Linked Open Data (LOD) cloud. This cloud currently contains over 600K RDF dumps (37B triples), ready to be queried through 640 SPARQL endpoints <ref type="bibr" target="#b2">[3]</ref>.</p><p>The diversity of available Linked Data is matched by the diversity of its consumers and their needs. For example, statisticians may want to exclude outliers from their analysis, or filter results based on the p-value of some statistical test; geographers typically need to select coordinates which fall within a certain area or distance from another point; and bioinformaticians often use shared evolutionary ancestry to transfer information between entities.</p><p>These and many other cases rely on information which is either impossible or impractical to materialize in triplestores beforehand. Whether an observation should be treated as an outlier depends on how one defines outliers, and on the observations it is compared with. One could precompute all pairwise distances between coordinates, but this scales quadratically with the number of entries and precludes queries spanning multiple datasets. Bioinformaticians use many different methods to predict evolutionary relatedness between biomolecules, and interpreting their results is highly context-dependent. More generally, solving domain-specific problems typically requires domain-specific tools and algorithms, whose outputs cannot always be sensibly precomputed. 
Thus, querying such information requires it to be derived at query time.</p><p>Several approaches enabling the generation of new data and relations at query time already exist:</p><p>-The SPARQL query language includes built-in functions for basic arithmetic and string handling, and widely supported extensions are available for the most common forms of data processing, such as datatype-aware handling of literals annotated with XML Schema <ref type="bibr" target="#b4">[5]</ref>. However, such general extensions cannot facilitate the diverse set of domain-specific algorithms and procedures required by many users. -SPARQL 1.1 <ref type="bibr" target="#b6">[7]</ref> allows the definition of customizable procedures attached to a specific URI via Extensible Value Testing (EVT), which is currently supported by several triplestore vendors. However, EVT has some fundamental limitations: custom procedures are restricted to limited query environments (e.g. BIND(), FILTER()), and queries incorporating them are not interoperable between endpoints. -Linked Data APIs <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b5">6]</ref> offer access to Linked Data in Web standard formats <ref type="bibr" target="#b13">[14]</ref> without requiring users to have extensive knowledge of RDF or SPARQL. They offer access to custom procedures through user-friendly interfaces, accessing Linked Data under the hood. Such APIs enable functional extension of Linked Data queries in a more flexible way than EVT, but greatly restrict interoperability with other Linked Data sources and the type of information that can be retrieved. -Several SPARQL endpoints allow expert users to define custom functions under the hood, e.g. Virtuoso, Jena and Stardog. 
Although very powerful, these features typically have a steep learning curve and, like EVT, are not interoperable with other endpoints.</p><p>Each of these approaches varies in terms of flexibility, interoperability, ease of implementation and user-friendliness. We argue many scientific communities would benefit from a combination of SPARQL's flexible, efficient manner of querying RDF data, and user-friendly access to easily customized procedures which generate RDF data at query time.</p><p>In this paper we present SCRY, the SPARQL compatible service layer. SCRY is a lightweight SPARQL endpoint that allows users to define their own services, assign them to a URI, and incorporate them in standards-compliant SPARQL queries. These services take RDF data as input and return RDF data as output, allowing users to generate and incorporate relevant information at query time. SCRY leverages SPARQL's query federation protocol to maintain interoperability with other SPARQL endpoints. Essentially, this embeds API-like functionality into pure SPARQL queries, in a standards-compliant format.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Problem Definition</head><p>Domain-specific questions often require domain-specific solutions, particularly with regard to information and relations which must be derived at query time because they are impractical to precompute. Currently available approaches facilitating this are limited in terms of flexibility, interoperability, user-friendliness, ease of implementation, or a combination thereof. We propose to address these issues by executing services at query time, generating requested data on demand. Consider the following query:</p><p>SELECT * WHERE { ?array stats:mean ?mean ; stats:sd ?sd . }</p><p>If treated as a standard graph pattern, this query would only return arrays which have their mean and standard deviation materialized in the triplestore. However, if interpreted as service calls, the query engine could execute matching statistical procedures for stats:mean and stats:sd, and return bindings with a mean and standard deviation generated at query time.</p><p>Derived values like means and standard deviations are impractical to materialize statically in a dataset, but there are many scenarios where making them query-accessible is useful, if not essential, to answer domain-specific questions. Thus, the problem we address in this paper is to access Linked Data in a manner which (1) combines SPARQL's flexibility and efficiency with the functional extension provided by Linked Data APIs or under-the-hood endpoint customization, (2) coexists and integrates with extant SPARQL tools and endpoints by complying with current standards, and (3) is easy for users to extend with their own domain-specific services.</p></div>
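The service-call interpretation described above can be sketched in a few lines of Python. This is a hypothetical stand-in, not SCRY's actual code; the stats: URIs and function names are illustrative assumptions:

```python
import statistics

# Hypothetical registry mapping predicate URIs to Python callables.
SERVICES = {
    "http://example.org/stats#mean": statistics.mean,
    "http://example.org/stats#sd": statistics.stdev,
}

def resolve_pattern(array, predicates):
    """Bind each service predicate to a value computed at query time,
    rather than looking it up in a persistent RDF graph."""
    return {p: SERVICES[p](array) for p in predicates if p in SERVICES}

bindings = resolve_pattern(
    [1.0, 2.0, 3.0, 4.0],
    ["http://example.org/stats#mean", "http://example.org/stats#sd"],
)
```

A query engine interpreting the stats:mean and stats:sd triples this way would return these computed bindings instead of failing to match a persistent graph.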
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">SCRY</head><p>Our SPARQL compatible service layer (SCRY) acts as a lightweight SPARQL endpoint, granting users access to easily customized services during query execution. SCRY allows services and their inputs to be encoded by URIs in the basic graph patterns of SPARQL queries. Users must configure an instance of SCRY, which we will hereafter refer to as an orb, with a set of services and associated URIs. Whenever a SCRY orb is queried, it searches for these URIs in the query's graph patterns and executes the associated services, prior to resolving the query itself. Upon execution, services generate RDF data, populating an RDF graph against which the original query will be resolved. Thus, what sets SCRY apart from traditional endpoints is that it resolves queries against RDF data generated at query time, rather than against a persistent RDF graph.</p><p>Services accessed through SCRY can involve simple tasks like rounding off a number, or running complex secondary programs using local or remote resources. Typical use involves sending a query to any conventional SPARQL endpoint, which then invokes SCRY through a federated query. Information retrieved from the primary endpoint's persistent RDF graph can be used as input for a service made available through a personalized, locally hosted SCRY orb. The SCRY orb then generates an RDF graph by executing the encoded services, evaluates the federated query against said graph, and returns the results to the primary endpoint (see Figure <ref type="figure" target="#fig_0">1</ref>). Use case-driven examples are given below.</p><p>This federation-oriented design is completely compliant with current standards, allowing SCRY to be used with any federation-capable primary endpoint. However, it also means SCRY necessarily inherits a susceptibility to network latency from the way in which the SPARQL protocol implements query federation. 
Using HTTP to push serialized SPARQL queries and RDF data back and forth is relatively expensive in terms of overhead, which is particularly wasteful if the computational steps to get from input to output are short and straightforward.</p><p>SCRY is implemented in Python, using the RDFLib package <ref type="bibr" target="#b8">[9]</ref> to interpret and resolve SPARQL queries, and the Flask microframework <ref type="bibr" target="#b12">[13]</ref> to handle federation via HTTP. Services must thus be accessible from Python, either by being implemented as Python code or via calls to the shell, e.g. with os.system(). SCRY's source code, including the services demonstrated below, is available at https://github.com/bas-stringer/scry/.  </p></div>
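The typical usage pattern above can be made concrete with a federated query. The sketch below (held in a Python string for easy reuse) assumes a hypothetical orb address and example URIs; the actual prefixes and service vocabulary depend on how an orb is configured:

```python
# Hypothetical federated query: a primary endpoint retrieves ?array from its
# persistent graph, then delegates the stats: pattern to a local SCRY orb
# via SPARQL 1.1's standard SERVICE keyword. All URIs are placeholders.
FEDERATED_QUERY = """
PREFIX stats: <http://example.org/stats#>
SELECT ?array ?mean ?sd WHERE {
  ?dataset <http://example.org/hasValues> ?array .
  SERVICE <http://localhost:5000/scry> {
    ?array stats:mean ?mean ;
           stats:sd   ?sd .
  }
}
"""
```

Because SERVICE is part of the SPARQL 1.1 standard, this query can be sent unmodified to any federation-capable primary endpoint.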
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Use Case 1: Statistics</head><p>We have implemented several services to supplement SPARQL's basic built-in arithmetic, for example to calculate the standard deviation of an array, and the Pearson correlation between two arrays <ref type="foot" target="#foot_0">5</ref> . In SCRY, these can be implemented in as few as two lines of code each, roughly an order of magnitude fewer than needed in Jena, Virtuoso or Stardog (see table <ref type="table" target="#tab_0">1</ref>). Social historians running the CEDAR project have published statistical data from the Dutch historical censuses <ref type="bibr" target="#b11">[12]</ref>. They can now run queries which include, for example, the standard deviation of the population counts of 1899 <ref type="foot" target="#foot_1">6</ref> .</p><p>Likewise, the Linked Statistical Data Analysis project<ref type="foot" target="#foot_2">7</ref> provides Linked Data for various metrics, including several precomputed statistics such as Kendall's τ correlation. However, we are interested in Pearson's r correlation instead. Querying the raw data through their endpoint and federating it to a SCRY orb allows us to calculate Pearson's correlation coefficient between, for example, infant mortality rate and corruption perception indices in 2009<ref type="foot" target="#foot_3">8</ref> .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Use Case 2: Bioinformatics</head><p>Homology is one of the most important concepts in bioinformatics. The term indicates that two entities share evolutionary ancestry, which suggests those entities have a similar biological function. Thus, knowledge of an entity can cautiously be inferred from knowledge about its homologs.</p><p>The Bio2RDF project has compiled one of the largest collections of biological Linked Data, comprising nearly 12B triples which describe 1.1B unique entities <ref type="bibr" target="#b3">[4]</ref>. More recently published sources of RDF data, such as neXtProt <ref type="bibr" target="#b9">[10]</ref> and the Human Protein Atlas (HPA) <ref type="bibr" target="#b14">[15]</ref>, are not yet included therein.</p><p>Given the sheer volume of biological RDF data, making homology a query-accessible property would have many applications in bioinformatics. To this end, we have implemented a procedure that runs the BLAST program <ref type="bibr" target="#b1">[2]</ref>: the most commonly used method to find homologs, cited nearly 55 000 times to date<ref type="foot" target="#foot_4">9</ref> .</p><p>Table <ref type="table">2</ref>. Results of an integrative query using SCRY's BLAST procedure to find the number of tissue-specific co-expressed homologs of hemoglobin β. The first column shows the name of tissues in which hemoglobin β is expressed. The second column shows the number of its homologs coexpressed in that tissue. The Human Protein Atlas lists which proteins are found where in the human body. This information is exposed as RDF, which we have loaded in a private primary endpoint. From this endpoint, we can now federate queries to a SCRY orb to invoke services. 
Using our BLAST procedure, for example, we can investigate coexpression: for a given query protein, we ask the primary endpoint in which tissues it is expressed; we invoke the BLAST service through a federated query to find the protein's homologs; and we ask the primary endpoint how many of those homologs are expressed in the same tissues, all within a single SPARQL query. Running such a query for hemoglobin β reveals it is found in 8 different tissues, and that at least 3 of its homologs are found in each of those tissues (see table <ref type="table">2</ref>).</p></div>
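A BLAST-backed service must also translate the program's output back into bindings. As an illustration (assuming BLAST's standard tabular output format, -outfmt 6; the identifiers and E-value threshold below are placeholders), extracting homolog IDs is a short parsing step:

```python
def parse_blast_hits(tabular_output, max_evalue=1e-5):
    """Extract subject (homolog) identifiers from BLAST tabular output.
    Columns (-outfmt 6): qseqid sseqid pident length mismatch gapopen
    qstart qend sstart send evalue bitscore."""
    homologs = []
    for line in tabular_output.strip().splitlines():
        fields = line.split("\t")
        sseqid, evalue = fields[1], float(fields[10])
        if evalue <= max_evalue:
            homologs.append(sseqid)
    return homologs

# Two fabricated hit lines; only the first passes the E-value cutoff.
sample = ("HBB\tHBD\t92.5\t146\t11\t0\t1\t146\t1\t146\t1e-80\t280\n"
          "HBB\tXYZ1\t25.0\t100\t60\t5\t1\t100\t1\t100\t0.5\t30")
hits = parse_blast_hits(sample)
```

Each extracted identifier can then be emitted as a triple in the orb's query-time graph, making homologs matchable within the federated query.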
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusions</head><p>An ever-increasing number of scientific communities are adopting Semantic Web technology and Linked Data principles. Their many domain-specific problems require equally many domain-specific solutions. This is especially true when considering derived information, which is impractical to precompute and thus must be generated at query time.</p><p>We present SCRY, an easily customized, lightweight SPARQL endpoint that facilitates executing user-defined services at query time, making their results immediately accessible within SPARQL queries. Custom procedures are implemented with relative ease, whether they perform simple statistical analysis or run complex secondary programs like BLAST. We find that extending SPARQL in this novel way is (i) an order of magnitude faster than extending other SPARQL endpoints, and (ii) compatible with any existing SPARQL 1.1 compliant endpoint.</p><p>These benefits come at the cost of a dependence on SPARQL's implementation of query federation. In particular, network latency can become an issue. Despite this limitation, SCRY provides a platform through which statistics, bioinformatics, and a variety of other scientific disciplines can incorporate domain-specific programs and algorithms within SPARQL queries, better enabling these diverse communities to harness the power of Semantic Web technologies.</p><p>Many roads are open for the future. First and foremost, we intend to develop a community-managed service repository, through which users can share and receive feedback on the services they implement. Furthermore, we plan on extending this work by implementing: (i) a browser-based query interface, allowing users to query their SCRY orb directly (i.e. 
not through federated queries); (ii) efficiency, security and authorization features, which will make it feasible to host public SCRY orbs; and (iii) more domain-specific procedures, to further demonstrate SCRY's versatility and enable more scientific communities to harness the power of Semantic Web technologies.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Dataflow diagram of a typical SPARQL query using SCRY, through federated queries from a primary endpoint.</figDesc><graphic coords="4,134.77,501.64,340.16,104.04" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Lines of code needed to extend various SPARQL endpoints with support for statistical functions. Besides being shorter, extended SCRY orbs are compatible with any SPARQL compliant endpoint by design, avoiding the need to rewrite similar extensions for different endpoint implementations.</figDesc><table><row><cell>Function</cell><cell cols="5">SCRY Jena Virtuoso/SQL Virtuoso/C Stardog</cell></row><row><cell>Std. deviation</cell><cell>2</cell><cell>13</cell><cell>10</cell><cell>12</cell><cell>89</cell></row><row><cell>Pearson's r</cell><cell>2</cell><cell>19</cell><cell>27</cell><cell>33</cell><cell>91</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_0">See http://bit.ly/stats-impl</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_1">See query at http://bit.ly/scry-sd</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_2">See http://stats.270a.info/.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_3">See http://bit.ly/transparency-270a</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_4">Citations counted by Google Scholar.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgements. The authors wish to express great gratitude towards Frank van Harmelen and Paul Groth, for their advice and feedback during the project; resident Python guru Maurits Dijkstra for his support with development and implementation of the program; and Laurens Rietveld and Ali Khalili for their valuable comments on this manuscript.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<ptr target="https://github.com/UKGovLD/linked-data-api" />
		<title level="m">Linked Data API</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
		<respStmt>
			<orgName>Tech. rep., UK Government Linked Data</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Gapped BLAST and PSI-BLAST: a new generation of protein database search programs</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">F</forename><surname>Altschul</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nucleic Acids Research</title>
		<imprint>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">LOD Laundromat: A Uniform Way of Publishing Other People&apos;s Dirty Data</title>
		<author>
			<persName><forename type="first">W</forename><surname>Beek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ISWC 2014</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Bio2RDF: Towards a mashup to build bioinformatics knowledge systems</title>
		<author>
			<persName><forename type="first">F</forename><surname>Belleau</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Biomedical Informatics</title>
		<imprint>
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>David</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">W</forename><surname>Fallside</surname></persName>
		</author>
		<ptr target="http://www.w3.org/TR/xmlschema-0/" />
		<title level="m">XML Schema Part 0: Primer Second Edition</title>
				<imprint>
			<publisher>World Wide Web Consortium</publisher>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
	<note type="report_type">Tech. rep</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">API-centric Linked Data integration: The Open PHACTS Discovery Platform case study</title>
		<author>
			<persName><forename type="first">P</forename><surname>Groth</surname></persName>
		</author>
		<ptr target="http://www.sciencedirect.com/science/article/pii/S1570826814000195" />
	</analytic>
	<monogr>
		<title level="j">Web Semantics: Science, Services and Agents on the World Wide Web</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="issue">0</biblScope>
			<biblScope unit="page" from="12" to="18" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">SPARQL 1.1 Query Language</title>
		<author>
			<persName><forename type="first">S</forename><surname>Harris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Seaborne</surname></persName>
		</author>
		<ptr target="http://www.w3.org/TR/sparql11-query/" />
	</analytic>
	<monogr>
		<title level="m">Tech. rep., World Wide Web Consortium</title>
				<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Heath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bizer</surname></persName>
		</author>
		<title level="m">Linked Data: Evolving the Web into a Global Data Space</title>
				<imprint>
			<publisher>Morgan and Claypool</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
	<note>1st edn</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">RDFLib Python Library</title>
		<author>
			<persName><forename type="first">D</forename><surname>Krech</surname></persName>
		</author>
		<ptr target="https://github.com/RDFLib/rdflib" />
		<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
		<respStmt>
			<orgName>Tech. rep.</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">neXtProt: a knowledge platform for human proteins</title>
		<author>
			<persName><forename type="first">L</forename><surname>Lane</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nucleic Acids Research</title>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Linking Open Data cloud diagram</title>
		<author>
			<persName><forename type="first">Max</forename><surname>Schmachtenberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christian</forename><surname>Bizer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jentzsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cyganiak</surname></persName>
		</author>
		<ptr target="http://lod-cloud.net/" />
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">CEDAR: The Dutch Historical Censuses as Linked Open Data</title>
		<author>
			<persName><forename type="first">A</forename><surname>Meroño-Peñuela</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guéret</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ashkpour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Schlobach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Semantic Web -Interoperability, Usability, Applicability</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note>under review</note>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Flask Python micro web application framework</title>
		<author>
			<persName><forename type="first">A</forename><surname>Ronacher</surname></persName>
		</author>
		<ptr target="http://flask.pocoo.org/" />
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
	<note type="report_type">Tech. rep</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">JSON-LD: A JSON-based Serialization for Linked Data</title>
		<author>
			<persName><forename type="first">M</forename><surname>Sporny</surname></persName>
		</author>
		<ptr target="http://www.w3.org/TR/json-ld/" />
		<imprint>
			<date type="published" when="2014">2014</date>
			<publisher>World Wide Web Consortium</publisher>
		</imprint>
	</monogr>
	<note type="report_type">Tech. rep</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Tissue-based map of the human proteome</title>
		<author>
			<persName><forename type="first">M</forename><surname>Uhlén</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Science</title>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
