<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Avalanche: Putting the Spirit of the Web back into Semantic Web Querying</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Cosmin</forename><surname>Basca</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Abraham</forename><surname>Bernstein</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">DDIS</orgName>
								<orgName type="department" key="dep2">Department of Informatics</orgName>
								<orgName type="institution">University of Zurich</orgName>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<address>
									<settlement>Zurich</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Avalanche: Putting the Spirit of the Web back into Semantic Web Querying</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">7E5E130F735C8051F6C3AFD677E0C0EF</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T20:17+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Traditionally, Semantic Web applications have either included a web crawler or relied on external services to gain access to the Web of Data. Recent efforts have enabled applications to query the entire Semantic Web for up-to-date results. Such approaches are based either on centralized indexing of semantically annotated metadata or on link traversal and URI dereferencing, as in the case of Linked Open Data. By making limiting assumptions about the information space, they violate the openness principle of the Web, a key factor for its ongoing success.</p><p>In this article we propose a technique called Avalanche, designed to allow a data surfer to query the Semantic Web transparently, without making any prior assumptions about the distribution of the data, thus adhering to the openness principle. Specifically, Avalanche can perform "live" (SPARQL) queries over the Web of Data. First, it gathers on-line statistical information about the data distribution, as well as bandwidth availability. Then, it plans and executes the query in a distributed manner, trying to provide first answers quickly. The main contribution of this paper is the presentation of this open and distributed SPARQL querying approach. Furthermore, we propose to extend the query planning algorithm with qualitative statistical information. We empirically evaluate Avalanche using a realistic dataset, show its strengths, but also point out the challenges that still exist.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>With the introduction of the World Wide Web, the way we share knowledge and conduct day-to-day activities has changed fundamentally. With the advent of the Semantic Web, a Web of Data is emerging, interlinking ever more machine-readable data fragments represented as RDF documents or queryable semantic endpoints. It is in this ecosystem that unexplored avenues for application development are emerging.</p><p>While some application designs include a Semantic Web (SW) data crawler, others rely on services that facilitate access to the Web of Data (WoD) either through the SPARQL protocol or various APIs (e.g. Sindice or Swoogle). As the mass of data continues to grow -currently Linked Open Data (LOD) <ref type="bibr" target="#b0">[1]</ref> accounts for 4.7 billion triples -the scalability factor, combined with the Web's uncontrollable nature and its heterogeneity, will give rise to a new set of challenges.</p><p>A question marginally addressed today is: how to query the Web of Data on-demand without hindering the flexible openness principle of the Web -seen as the ability to query independent, un-cooperative semantic databases without controlling their distribution or availability and without requiring adherence to fixed publishing guidelines (e.g. LOD). The underlying assumptions of the WoD, as with the WWW, are that (a) there exists no distribution pattern of information onto servers, (b) there is no guarantee of a working network, (c) there is no centralized resource discovery system, (d) there exists a standard (HTTP) for the retrieval of information, and (e) the size of RDF data no longer allows us to consider single-machine systems feasible. 
With the serendipitous nature of the Semantic Web <ref type="bibr" target="#b11">[12]</ref>, querying the global information space gives rise to new possibilities unthought of before.</p><p>Several approaches that tackle the problem of querying the entire Web of Data have emerged lately. One solution provides a centralized queryable endpoint for the Semantic Web that caches all data. This approach allows searching for and joining potentially distributed data sources. It does, however, incur the significant problem of ensuring an up-to-date cache and might face crucial scalability hurdles in the future, as the Semantic Web continues to grow.</p><p>Other approaches use the guidelines of LOD publishing to traverse the linked data cloud in search of the answer. Such a method produces up-to-date results and can detect data locations solely from the URIs of bound entities in the query. Relying on URI structure, however, may cause significant scalability issues when retrieving distributed data sets, since (1) the servers dereferenced in the URI may become overloaded, and (2) it limits the possibilities of rearranging (or moving) the data around by binding the id (i.e. the URI) to the storage location.</p><p>Finally, traditional database federation techniques have been applied to query the WoD. They rely on statistical information from queryable endpoints, which a mediator uses to build efficient query execution plans. Their main drawback is that the query execution engine must be aware of the data distribution ex-ante (i.e., before query execution). 
Moreover, in most cases, data sources even need to register themselves at the query execution engine with detailed information about the data they contain.</p><p>In this paper, we propose Avalanche, a novel approach for querying the Web of Data that (1) makes no assumptions about data distribution, availability, or partitioning, (2) provides up-to-date results, and (3) is flexible as it makes no assumption about the structure of participating triple stores. Consequently, it addresses the shortcomings of previous approaches. To query the WoD, Avalanche provides a novel technique by means of a two-phase protocol: a discovery step, i.e. gathering statistical information about data distribution from the involved hosts, and a planning/optimization step over the distributed SPARQL endpoints. Hence, the main contributions of our approach are: (1) on-demand transparent querying over the Web of Data, without any prior knowledge about its distribution; (2) a formal description of our approach, together with possible optimizations for each step, including a novel planning strategy and cost model for dealing with Web-scale graph data; and (3) a reference implementation of the Avalanche technique.</p><p>In the remainder, we first review relevant related work. Section 3 provides a detailed description of Avalanche. In Section 4 we evaluate several planning strategies and estimate the performance of our system. In Section 5 we present several future directions and optimizations, and conclude in Section 6.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related work</head><p>Several solutions for querying the Web of Data over distributed SPARQL endpoints have been proposed before. They can be grouped into two streams: (a) distributed query processing and (b) statistical information gathering over RDF sources.</p><p>Research on distributed query processing has a long history in the database field <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b8">9]</ref>. Its traditional concepts are adapted in current approaches to provide integrated access to RDF sources distributed over the Web. For instance, Yars2 <ref type="bibr" target="#b5">[6]</ref> is an end-to-end semantic search engine that uses a graph model to interactively answer queries over structured and interlinked data, collected from disparate Web sources. Another example is the DARQ engine <ref type="bibr" target="#b14">[15]</ref>, which divides a SPARQL query into several subqueries, forwards them to multiple, distributed query services and, finally, integrates the results of the subqueries. Finally, Rdfpeers <ref type="bibr" target="#b2">[3]</ref> is a distributed RDF repository that stores three copies of each triple in a peer-to-peer network, by applying global hash functions to its subject, predicate and object. Virtuoso <ref type="bibr" target="#b3">[4]</ref>, a data integration software developed by OpenLink Software, also focuses on distributed query processing. The drawback of these solutions, however, is that they assume total control over the data distribution -an unrealistic assumption in the open Web. Similarly, SemWIQ <ref type="bibr" target="#b10">[11]</ref> provides a mediator that distributes the execution of SPARQL queries transparently. Its main focus is to provide an integration and sharing system for scientific data. Whilst it does not assume control over the instance distribution, it does assume perfect knowledge about it. 
Addressing this drawback, some approaches <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b16">17]</ref> propose to extend SPARQL with explicit instructions specifying where to execute certain sub-queries. Unfortunately, this assumes ex-ante knowledge of the data distribution on the part of the query writer. Finally, Hartig et al. <ref type="bibr" target="#b6">[7]</ref> describe an approach for executing SPARQL queries over spaces structured according to the Web of Linked Data rules <ref type="bibr" target="#b0">[1]</ref>. Whilst they make no assumptions about the openness of the data space, the LOD rules require them to place the data on the URI-referenced servers -a limiting assumption, for example, when caching/copying data.</p><p>Research on query optimization for SPARQL includes query rewriting <ref type="bibr" target="#b7">[8]</ref>, triple pattern optimization based on selectivity estimations <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b18">19,</ref><ref type="bibr" target="#b13">14]</ref>, and other forms of statistical information gathering over RDF sources <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b4">5]</ref>. RDFStats <ref type="bibr" target="#b9">[10]</ref> is an extensible RDF statistics generator that records how often RDF properties are used and feeds automatically generated histograms to SemWIQ. Histograms on the combined values of SPO triples have proved especially useful for providing selectivity estimations for filters <ref type="bibr" target="#b18">[19]</ref>. For joins, however, histograms can grow very large and are rarely used in practice. Another approach is to compute frequent paths (i.e., frequently occurring sequences of S, P or O) in the RDF data graph ahead of time and keep statistics about the most beneficial ones <ref type="bibr" target="#b12">[13]</ref>. It is unclear how this would work in a highly distributed scenario. Finally, Neumann et al. <ref type="bibr" target="#b13">[14]</ref> claim that selectivity estimation is a worthwhile solution for tens of millions of RDF triples, but unsuitable for billions of triples, because the size of the data and the increasing diversity in property names lead to poor estimations, thus misguiding the query optimizer.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Avalanche -System Design and Implementation</head><p>In this section, we describe the overall design of Avalanche and the underlying philosophy of distributed query execution across large datasets spread over multiple, uncooperative servers. The major design difference between Avalanche and previous systems is that it assumes that the distribution of triples to the machines participating in the query evaluation is unknown prior to query execution. Hence, our approach follows neither a federated nor a peer-to-peer model; instead, the statistical discovery phase that is traditionally reserved for the (parallel) mediator component in clustered approaches becomes an individual step of each query execution. In the remaining part of this section, we will first illustrate our approach using a motivating example. This will lead the way towards thoroughly describing the Avalanche components and their novelty.</p><p>The system consists of six major components working together in a parallelized pipeline: the Avalanche endpoints Web Directory or Search Engine, the Statistics Requester, the Plan Generator, Plan Executor instances, Plan Materializer instances and the Query Stopper component, as seen in Figure <ref type="figure" target="#fig_0">1</ref>.</p><p>Avalanche comprises two phases: the Query Preprocessing phase and the parallel Query Execution phase. During Query Preprocessing, participating hosts are identified by means of a Search Engine such as Sindice<ref type="foot" target="#foot_0">1</ref> or a Web Directory. A lightweight endpoint-schema inverted index can also be used. Ontological prefixes (the shorthand notation of the schema, e.g. foaf) and schema invariants (e.g. predicates, classes, labels) are appropriate candidate entries to index. After query parsing, this information is immediately available and is used to quickly trim down the number of potential endpoints. 
Then, all selected Avalanche endpoints are queried for the cardinality (number of instances) of each unbound variable -statistical information that triple-stores generally possess.</p><p>In the Query Execution phase, the query is first broken down into the superset of all molecules, where a molecule is a subgraph of the overall query graph. A combination of minimally overlapping molecules covering the directed query graph is referred to as a solution. Binding all molecules in a given solution to physical hosts (Avalanche endpoints) that may resolve them transforms a solution into a plan. Given the size of the Web and the unknown distribution of the RDF data, Avalanche will try to optimize the execution of the query to quickly find the first K results. The proposed planning system and algorithm, though complete, will normally not be allowed to exhaust the entire search space, since the cost of doing so is prohibitively expensive. Instead, the planner component strives to execute the most "promising" query plans first, while being monitored by the Query Stopper for termination conditions. To further reduce the size of the search space, a windowed version of the search algorithm can be employed -i.e. with each exploratory step only the first M molecules are considered, thus sacrificing completeness.</p><p>As shown in Figure <ref type="figure" target="#fig_0">1</ref>, the Plan Generator relies on statistics, obtained from the Statistics Requester, about the data contained on the different hosts. Any generated plan is put in the Plans Queue, regardless of whether the planner has finished its overall task of exploring the plan space. Plans in the Plans Queue are fetched by Plan Executors, which execute them in parallel, generating partial results, and put them in the Finished Plans Queue. 
There, they get fetched by one of the parallel executing Plan Materializers, which will merge and materialize the partial results.</p><p>To put Avalanche into perspective, consider the following motivating query, which executes over Linked Open Datasets describing movies and actors:</p><p>SELECT ?title ?photoCollection ?name WHERE { ?film dc:title ?title; movie:actor ?actor; owl:sameAs ?sameFilm. ?actor a foaf:Person; movie:actor_name ?name. ?sameFilm dbpedia:hasPhotoCollection ?photoCollection. ?sameFilm dbpedia:studio "Producers Circle". }</p><p>The goal of Avalanche is to return the list of all movie titles, their photo collections and the names of starring actors for movies produced at the "Producers Circle" studios -considering that the required information is spread with an unknown distribution over several LOD endpoints.</p><p>At a given moment during the execution of a plan, a Plan Executor instance may find itself in the state depicted in Figure <ref type="figure" target="#fig_1">2</ref> (described in depth in Section 3.2). The plan comprises three molecules, M1, M2 and M3, and involves three hosts: host A, host B and host C. Molecule M1 was reported to be highly selective on host A (holding Linked Movie<ref type="foot" target="#foot_1">2</ref> data), while the remainder of the plan -molecules M2 and M3 -is distributed between hosts B and C (both holding DBPEDIA<ref type="foot" target="#foot_2">3</ref> data). Given that we operate in an environment where bandwidth cost is non-trivial, we should not "just" transport all partial results to one central server to be joined. Instead, we start by executing the highly selective (or, in this case, lowest-cardinality) molecule M1 on host A and then limit the execution space on host B by sending over the results from host A. The process repeats for the remaining molecules in the plan and is finalized with a merge/update operation in reverse join order. 
It is important to note that to execute plans, hosts need to share a common id space -a given in the Semantic Web thanks to URIs. Naturally, using full RDF strings as identifiers can be prohibitively expensive. To limit bandwidth requirements, we chose to employ a single global id space obtained by applying a hash function from the SHA family to the URIs.</p><p>The remainder of this section details the functionality of the most important elements -the Plan Generator, Plan Executor and Plan Materializer -and explains how the overall pipeline stops.</p></div>
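The shared id space described above can be illustrated in a few lines of Python. This is only a sketch: the function name `uri_id` and the 64-bit truncation are our assumptions, since the text specifies only that a SHA-family hash is applied to URIs.

```python
import hashlib


def uri_id(uri: str, bits: int = 64) -> int:
    """Map a URI into a shared global id space by hashing it with SHA-1.

    Any SHA-family function works; the only requirement is that every
    participating host applies the same one, so ids can be joined without
    shipping full URI strings. The 64-bit truncation is an illustrative
    choice, not taken from the paper.
    """
    digest = hashlib.sha1(uri.encode("utf-8")).digest()
    return int.from_bytes(digest[: bits // 8], "big")
```

Because the hash is deterministic, two hosts computing `uri_id` for the same URI obtain the same id independently, which is what makes distributed joins over ids possible without coordination.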
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Generating Query Plans</head><p>The planner's main focus is to generate query plans that are likely to produce results quickly and at minimal cost. As shown in Algorithm 1, the planner optimizes the construction of plans using a multi-path informed (best-first) search strategy that maximizes the Objective function of a plan. Therefore, all plans are generated in descending order of their objective function.</p></div>
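The best-first strategy just described can be sketched as follows. The encoding of a plan as a tuple of (molecule, host) bindings and all names are illustrative assumptions, not the paper's API; the sketch assumes the supplied objective never increases as bindings are added, so the first complete plan popped from the fringe is the best one.

```python
import heapq


def plan_generator(molecules, hosts, objective):
    """Yield complete plans in descending order of their objective score.

    A partial plan binds a prefix of `molecules` to hosts; the fringe is a
    max-heap (scores negated for heapq) over partial and complete plans,
    mirroring a best-first search.
    """
    fringe = [(-objective(()), ())]
    while fringe:
        neg_score, plan = heapq.heappop(fringe)
        if len(plan) == len(molecules):      # complete plan: emit it
            yield plan, -neg_score
            continue
        molecule = molecules[len(plan)]
        for host in hosts:                   # expand by one more binding
            candidate = plan + ((molecule, host),)
            score = objective(candidate)
            if score > 0:                    # prune no-utility plans early
                heapq.heappush(fringe, (-score, candidate))
```

With an objective such as the product of inverse cardinalities, the generator emits the lowest-cardinality host assignment first, without exhausting the whole plan space before producing its first answer.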
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 1 The plan generator algorithm</head><p>Plan-Generator(Molecules, Hosts, Cardinalities)</p><formula xml:id="formula_0">fringe = []</formula><p>In defining the Objective function we use the statistical information gathered beforehand (result-set cardinalities). To ensure the generation of the most productive plans, our function models the chance of finding a solution, the utility U, divided by the cost of executing the query, C. Hence:</p><formula xml:id="formula_1">Objective = U / C<label>(1)</label></formula><p>A challenge that emerges from preserving the openness of the query process and the flexibility of semantic data publishing is the exponential size of the plan-composition space. The space complexity of the problem is O(N^3), considering that the problem size increases by M * H with each step towards a complete plan, where H represents the total number of hosts involved and M is a measure of the query complexity (i.e. the number of unique molecules that can be extracted from the given query graph). A simple calculation for a scenario with 1000 hosts and a rather large query (≈ 15 unbound variables), which might generate 500 molecules with an average plan depth of 10 molecules, results in 5 million possible combinations to form plans. Not all combinations produce viable plans, so pruning low- or no-utility plans early is desirable, as seen in line 7 of the planning algorithm.</p><p>We follow the assumption that selective molecules -those with low cardinalities -will help the plan converge faster. In the bootstrap phase, the utility of the first plan node is equal to the inverse of its cardinality CNT(N1) (where N1 is node 1 and CNT(N1) its cardinality), factored by the size of the plan, Edges(N1). 
Further on, we consider a join where the best-case cardinality is the minimum of the involved result-set cardinalities (see Equation <ref type="formula" target="#formula_2">2</ref>). We define the cost C of executing queries in Equation <ref type="formula" target="#formula_3">3</ref>. The cost of the first node is assumed to be constant. For all other nodes we combine: (a) the network latency L between two nodes -a measure of the time required to send the results from node N1 to node N2 given the bandwidth B -and (b) the cost of executing on N1 and N2, as approximated by their cardinalities. Finally, we scale this result with a measure of the size of the current molecule (the molecule assigned to N2) relative to the size of the whole solution, in order to encourage the choice of nodes that aid convergence.</p><formula xml:id="formula_2">U(N1,N2) = { Edges(N1) / CNT(N1), first node | min(CNT(N1), CNT(N2)), otherwise }<label>(2)</label></formula><formula xml:id="formula_3">C(N1,N2) = { 1, first node | (L + CNT(N1)/B + (CNT(N1) + CNT(N2)) / CNT(N1)) * Edges(Solution) / Edges(N2), otherwise }<label>(3)</label></formula><p>Extended Utility Function The main drawback of this utility function is that it assumes the lower cardinality of the two nodes is representative -an assumption that is quite wrong when searching for "rare" results given a large number of "promising" hosts, as it disregards the actual join probabilities. Consider the previous example query going out to two almost disjoint RDF servers: one with DBPEDIA data and another with public social network data. Assuming we found an actor through some other host, the utility function will not be able to favor DBPEDIA over the other host, as it cannot evaluate the actual number of joins. Hence, if the public social network host happens to be using a better network connection, the planner will be led astray. To overcome this effect we need a measure of join-quality. 
Following <ref type="bibr" target="#b15">[16]</ref> we employ bloom filters, which are space-efficient set representations: bit vectors populated via multiple hash functions. As stated in <ref type="bibr" target="#b1">[2]</ref>, bloom filters allow for a statistically solid estimation of the cardinality of the join between two sets:</p><formula xml:id="formula_5">JOIN(BF1, BF2) ≈ -(1/k) * ln( m(Z1 + Z2 - Z12) / (Z1 * Z2) ) / ln(1 - 1/m)<label>(4)</label></formula><p>where BF is a bloom filter, m is the number of bits in the bloom filter, k represents the number of hash functions, Zi represents the number of zero bits in BFi, and Z12 represents the number of zero bits in their inner product (bitwise AND). Since computing bloom filters for large sets is a costly operation, we propose using bloom filters as an extension to the previously proposed utility function only for highly selective molecules -those whose cardinality is below a manually set threshold. Given implementation-specific execution considerations, we empirically set the threshold to 1000 partial results (ids) for the given set. Consequently, the extended utility EU is now defined as follows:</p><formula xml:id="formula_7">EU(N1,N2) = { w1 * JOIN(BF(N1), BF(N2)) + w2 * U(N1,N2), N1 and N2 selective | w2 * U(N1,N2), otherwise }<label>(5)</label></formula><p>where w1 and w2 are weights that define the importance of the employed estimation methods. We chose w1 = 0.8 and w2 = 0.2 for our experiments, which means that for selective molecules we favor a more expensive, but more realistic, estimation.</p></div>
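Equation 4 can be exercised with a small sketch. The `BloomFilter` class below is an illustrative assumption (k salted SHA-1 hashes rather than any particular production scheme); `Z12` is computed as the number of zero bits in the bitwise AND of the two filters, matching the inner-product reading above.

```python
import hashlib
import math


class BloomFilter:
    """Minimal bloom filter: m bits, k salted SHA-1 hash functions."""

    def __init__(self, m=4096, k=3):
        self.m, self.k = m, k
        self.bits = [0] * m

    def add(self, item):
        for i in range(self.k):
            h = hashlib.sha1(f"{i}:{item}".encode()).digest()
            self.bits[int.from_bytes(h[:8], "big") % self.m] = 1


def estimate_join_cardinality(bf1, bf2):
    """Estimate the join (intersection) cardinality via Equation 4."""
    m, k = bf1.m, bf1.k
    z1 = bf1.bits.count(0)
    z2 = bf2.bits.count(0)
    # Z12: zero bits in the bitwise AND (inner product) of the two filters
    z12 = sum(1 for a, b in zip(bf1.bits, bf2.bits) if not (a and b))
    return -(1.0 / k) * math.log(m * (z1 + z2 - z12) / (z1 * z2)) \
        / math.log(1.0 - 1.0 / m)
```

Note that Z1 + Z2 - Z12 is exactly the number of zero bits in the bitwise OR of the two filters, so the estimate reduces to the standard per-filter cardinality formula applied to the two sets and their union.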
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 2 The plan execution algorithm</head><p>Execute-Plan(Plan)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Executing Plans</head><p>Specifically, following Algorithm 2, we start by executing the most selective molecule in the plan (steps 1 and 2 in Figure <ref type="figure" target="#fig_1">2</ref>). To perform the join (lines 10-12 in Algorithm 2) we send the results to host B and execute the join there (steps 3 and 4 in Figure <ref type="figure" target="#fig_1">2</ref>). Similarly, we join the remainder of the molecules. After all join operations have ended, we need to let hosts A and B know of all the elements that did not have a join-partner, by updating their structures (lines 18-20 in Algorithm 2; steps 8 to 12 in Figure <ref type="figure" target="#fig_1">2</ref>).</p><p>To increase execution performance, since many plans contain overlapping subqueries, we employ a memoization strategy, keeping partial results on the respective hosts for the duration of the query execution, while database caching strategies remain in effect at the same time. As a further improvement, site-level memory caches can be employed, bypassing the database altogether for "popular" result sets.</p></div>
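The selective-first join chaining can be approximated in a few lines. Here each molecule's partial results are modelled as a list of variable-binding dicts, an illustrative encoding of our own (the real system ships id tables between hosts rather than joining locally), and a plan is abandoned as soon as a join comes back empty.

```python
def join_bindings(left, right):
    """Nested-loop join of two binding tables on their shared variables."""
    out = []
    for l in left:
        for r in right:
            shared = set(l) & set(r)
            if all(l[v] == r[v] for v in shared):
                out.append({**l, **r})
    return out


def execute_plan(molecule_results):
    """Execute molecules from most to least selective, chaining joins.

    `molecule_results` maps each molecule to the bindings its host would
    return; in Avalanche the partial table is shipped to the next host
    instead of being joined in one place, but the ordering logic is the
    same: start with the lowest-cardinality molecule and use its results
    to constrain the next one.
    """
    ordered = sorted(molecule_results, key=len)   # lowest cardinality first
    partial = ordered[0]
    for nxt in ordered[1:]:
        partial = join_bindings(partial, nxt)
        if not partial:        # empty join: stop this plan early
            break
    return partial
```

Stopping on an empty intermediate join is what lets non-useful plans be cancelled cheaply, as discussed in the evaluation section.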
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Materializing Plans</head><p>Once a plan has finished its execution, the Plan Executor monitoring the process signals the Avalanche mediator by pushing the executed plan onto the Finished Plans Queue. Note that the executed plans do not contain the results yet, since the matches are kept as partial tables on their respective hosts. Hence, plans in the Finished Plans Queue are handled by a Plan Materializer, which materializes the partial results as described in Algorithm 3. First, we get an unbound result variable v1 (line 6). We then try to find the next possible result variable that will produce the lowest number of merge operations (procedure getNearestResultVariable in lines 8 or 11). Having chosen the next result variable, we create a partial result table (line 13) and merge it with the global result table (lines 14-17). We finish by removing duplicates and replacing all ids with the actual strings (lines 18 and 19). To further reduce the overhead of sending results between hosts, we use RLE compression.</p></div>
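Run-length encoding works particularly well on sorted numeric id lists, where consecutive ids collapse into (start, length) runs. The run encoding below is our illustrative choice; the text does not specify the exact scheme used.

```python
def rle_encode(ids):
    """Compress integer ids into (start, length) runs of consecutive
    values; the input is sorted first."""
    runs = []
    for x in sorted(ids):
        if runs and x == runs[-1][0] + runs[-1][1]:
            start, length = runs[-1]
            runs[-1] = (start, length + 1)   # extend the current run
        else:
            runs.append((x, 1))              # start a new run
    return runs


def rle_decode(runs):
    """Expand (start, length) runs back into the sorted id list."""
    return [start + i for start, length in runs for i in range(length)]
```

Dense result sets (many consecutive ids) compress to a handful of runs, which is exactly the case when whole ranges of hashed ids survive a join.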
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Stopping the query execution</head><p>Since we have no control over the distribution and availability of RDF data and SPARQL endpoints, providing a complete answer to the query is an unreasonable expectation. Instead, the Query Stopper monitors for the following stopping conditions: (1) a global timeout set for the whole query execution; (2) the first K unique results having been returned to the caller; and (3) to avoid waiting for the timeout when the number of results remains below K, relative result-saturation. Specifically, we employ a sliding window to keep track of the last n received result sets. If the standard deviation (σ) of these sets falls below a given threshold, then we stop execution. Using Chebyshev's inequality, we stopped when 1 - 1/σ² &gt; 0.9.</p></div>
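The saturation test can be sketched as a sliding window over the sizes of newly arriving result sets. The class below implements the plain below-threshold reading of the test (stop once the sizes have essentially stopped varying); the exact Chebyshev-based constant is left out, as the sketch only illustrates the mechanism, and all names are our assumptions.

```python
from collections import deque
import statistics


class QueryStopper:
    """Monitors arriving result-set sizes and signals saturation.

    Illustrative sketch: stop once the last `window` result-set sizes
    vary by less than `sigma_threshold`, i.e. new answers have dried up.
    """

    def __init__(self, window=5, sigma_threshold=1.0):
        self.sizes = deque(maxlen=window)
        self.sigma_threshold = sigma_threshold

    def record(self, n_new_results):
        self.sizes.append(n_new_results)

    def saturated(self):
        if len(self.sizes) < self.sizes.maxlen:
            return False                       # window not yet full
        return statistics.pstdev(self.sizes) < self.sigma_threshold
```

The caller would check `saturated()` (alongside the timeout and the K-results condition) after every arriving answer set and tear down the pipeline once any condition fires.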
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Preliminary Evaluation</head><p>In this section we describe the experimental evaluation of the Avalanche system. We first succinctly introduce the setup and then discuss the two evaluated properties: query execution and planner convergence.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Experimental setup</head><p>We tested Avalanche using a five-node cluster. Each machine had 2GB RAM and an Intel Core 2 Duo E8500 @ 3.16GHz. We chose this small number of nodes to better illustrate Avalanche's query handling strategies; we did not measure its ability to scale. The data was gathered directly from the LOD cloud. Specifically, we employed the IEEE (66K triples), DBLP (22 million triples) and ACM (13 million triples) publication data. The datasets were distributed over the five-node cluster, split by their origin and chronological order (e.g. ACM articles until 2003 on host A) as shown in Table <ref type="table">4</ref>.1. Recall that, as stated above, Avalanche makes no assumptions about the data distribution over the nodes.</p><p>For the purpose of evaluating Avalanche we selected 3 SPARQL queries, as listed in Appendix A. The queries were chosen in increasing order of complexity (in terms of the number of unbound variables and triple patterns). We conducted all query executions with the following parameters: 1) a timeout of 20 seconds, 2) a stop sliding window of size 5, 3) a saturation threshold of 0.9, and 4) a selectivity threshold for bloom filter construction of 1000, while searching for a maximum of 200 results. Query execution Figure <ref type="figure" target="#fig_2">3</ref> graphs the number of query results (left) and the execution time (right) for both the default utility U and the extended utility EU introduced in Section 3. Note that the query execution time for the extended utility is somewhat higher (though lower than the timeout), but it does find more answers to the queries. The time used for the extended utility is higher since it gives the better plans a higher priority and executes them earlier. The execution of "useful" plans does take longer, since a non-useful plan is stopped as soon as an empty join is discovered. 
Hence, the saturation condition will stop the default utility earlier, after it has executed fewer useful plans. Given a large number of hosts, we expect that the overhead of cancelling non-useful plans will exceed the cost of executing useful plans. Hence, the extended utility planner should converge faster.</p><p>As we see in this experiment, Avalanche is able to successfully execute query plans and retrieves many up-to-date results without having any prior knowledge of the data distribution. We, furthermore, see that different objective functions have a significant influence on the outcome and should play a critical role when deployed on the Semantic Web.</p><p>Planner convergence A second issue we investigated is the usefulness of the convergence criteria introduced in Section 3.4. Figure <ref type="figure" target="#fig_3">4</ref> graphs the total number of results against the number of new results, where the data points represent newly arriving -possibly empty -answer sets, with the stopping condition disabled.</p><p>As an example, consider query Q1. At first, the number of new results grows to a certain level. But, after ≈ 140 results have been gathered, no more new results are received. A similar behavior can be seen for each of the three queries. Hence, given the experimental results, the choice of a stopping condition is pertinent. The current stopping conditions would stop both queries Q1 and Q3 at the right point, when the correct plateau is reached. When considering the number of results found (see also Figure <ref type="figure" target="#fig_2">3</ref>), query Q2, however, is stopped somewhat early, in one of the local plateaus. The Avalanche system has shown how a completely heterogeneous distributed query engine that makes no assumptions about data distribution can be implemented. The current approach does have a number of limitations. 
In particular, we need to better understand the employed objective functions for the planner, investigate whether the requirements put on participating triple-stores are reasonable, explore whether Avalanche can be changed to a stateless model, and empirically evaluate whether the approach truly scales to a large number of hosts. Here we discuss each of these issues in turn.</p><p>The core optimization of the Avalanche system lies in its cost and utility function. The basic utility function only considers possible joins, with no information regarding the probability of the respective join. The proposed extended utility EU estimates the join probability of two highly selective molecules. Although this improves the accuracy of the objective function, its limitation to highly selective molecules is often impractical, as many queries (such as our example query) combine highly selective molecules with non-selective ones. Hence, we need to find a probabilistic distributed join cardinality estimation method for low-selectivity molecules. One approach might be to use bloom-filter caches to store precomputed, "popular" estimates. Another might be to investigate sampling techniques for distributed join estimation.</p><p>In order to support Avalanche, existing triple-stores should be able to:</p><list><item>report statistics: cardinalities, bloom filters, and other future extensions;</item><item>support the execution of distributed joins (common in distributed databases), which could be delegated to an intermediary but would then be inefficient;</item><item>share the same key space (keys can be URIs, but this would result in bandwidth-intensive joins and merges).</item></list><p>Whilst these requirements seem simple, we need to investigate how complex these extensions to triple-stores are in practice. Even better would be an extension of the SPARQL standard with the above-mentioned operations, which we will attempt to propose.
The current Avalanche process assumes that hosts keep partial results throughout plan execution to reduce the cost of local database operations, and that result-views are kept for the duration of a query. This limits the number of queries a host can handle. We intend to investigate whether a stateless approach is feasible. Note that the simple approach of using REST-ful services may not be applicable, as the size of the state (i.e., the partial results) may be huge and would overburden the available bandwidth.</p><p>We designed Avalanche with the need for high scalability in mind. The core idea follows the principle of decentralization. Avalanche also supports asynchrony by using asynchronous HTTP requests to avoid blocking, autonomy by delegating the coordination and execution of the distributed join/update/merge operations to the hosts, concurrency through the pipeline shown in Figure <ref type="figure" target="#fig_0">1</ref>, symmetry by allowing each endpoint to act as the initiating Avalanche node for a query caller, and fault tolerance through a number of time-outs and stopping conditions. Nonetheless, an empirical evaluation of Avalanche with a large number of hosts is still missing: a non-trivial shortcoming (due to the lack of suitable, partitioned datasets and the significant experimental complexity) that we intend to address in the near future.</p></div>
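The sliding-window stopping condition used in the experiments (window of size 5, saturation threshold 0.9) can be sketched as follows. This is an illustrative reading only, not Avalanche's actual implementation: the function name `should_stop` and the interpretation of saturation as the ratio between the result total at the start of the window and the current total are our assumptions.

```python
def should_stop(new_counts, window_size=5, saturation=0.9):
    """Sliding-window saturation check (illustrative sketch).

    `new_counts` holds the number of new results contributed by each
    arriving (possibly empty) answer set.  We stop once the cumulative
    total at the start of the window is at least `saturation` times the
    current total, i.e. the last `window_size` answer sets added almost
    nothing new -- a plateau like the one observed for query Q1.
    """
    totals = []          # running cumulative totals per answer set
    running = 0
    for c in new_counts:
        running += c
        totals.append(running)
    if len(totals) < window_size or totals[-1] == 0:
        return False     # not enough observations (or no results) yet
    return totals[-window_size] / totals[-1] >= saturation
```

With this reading, a query whose answer sets keep delivering fresh results never saturates, while a run that plateaus (e.g. after ≈ 140 results for Q1) is stopped once the window slides fully onto the flat region.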
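The bloom-filter-based join estimation discussed above can likewise be illustrated with a small sketch: a remote host publishes a bloom filter over its join keys, and a peer estimates the distributed join cardinality by probing that filter with its local keys. The class, function names, filter size, and hashing scheme below are hypothetical stand-ins, not Avalanche's actual estimator.

```python
import hashlib

class BloomFilter:
    """Minimal bloom filter; a hypothetical stand-in for the per-molecule
    filters that Avalanche hosts would exchange as statistics."""

    def __init__(self, size_bits=8192, num_hashes=4):
        self.m, self.k = size_bits, num_hashes
        self.bits = 0  # bit array packed into a single integer

    def _positions(self, key):
        # Derive k bit positions from independent salted hashes.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def __contains__(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

def estimate_join_cardinality(local_keys, remote_filter):
    """Estimate |local join remote| by probing the remote bloom filter
    with the local join keys.  False positives can only inflate the
    count, so this is an optimistic estimate whose error shrinks as the
    filter grows relative to the key set."""
    return sum(1 for key in local_keys if key in remote_filter)
```

Since membership tests never yield false negatives, the estimate upper-bounds the true overlap; the trade-off between filter size and false-positive rate is the classic one analyzed in the bloom-filter survey cited in the references.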
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion</head><p>In this paper we presented Avalanche, a novel approach for querying the Web of Data that (1) makes no assumptions about data distribution, availability, or partitioning, (2) provides up-to-date results, and (3) is flexible, since it assumes nothing about the structure of participating triple stores. Specifically, we showed that Avalanche is able to execute non-trivial queries over distributed data sources with an ex-ante unknown data distribution. We presented two possible utility functions to guide planning and execution over the distributed data sources: the basic simple model and an extended model exploiting join estimation. We found that whilst the simple model found some results faster, it found fewer results than the extended model using the same stopping criteria. We believe that when querying huge information spaces, the overhead of badly selected plans will be offset by the better but slower plans of the extended utility function.</p><p>To our knowledge, Avalanche is the first Semantic Web query system that makes no assumptions about the data distribution whatsoever. Whilst it is only a first implementation with a number of drawbacks, it represents an important first step towards bringing the spirit of the web back to triple-stores: a major condition for fulfilling the vision of a truly global and open Semantic Web.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1.</head><label>1</label><figDesc>Fig. 1. The Avalanche execution pipeline</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2.</head><label>2</label><figDesc>Fig. 2. Distributed Join and Update operations for a Simple Plan</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3.</head><label>3</label><figDesc>Fig. 3. Number of retrieved results and query execution times</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 4.</head><label>4</label><figDesc>Fig. 4. Query planner convergence</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 1.</head><label>1</label><figDesc>Number of triples, unique subject S, object O, and predicate P distributions on the hosts. Predicates are shown by dataset.</figDesc><table><row><cell>Host</cell><cell># Triples</cell><cell># S / O</cell><cell># DBLP P</cell><cell># ACM P</cell><cell># IEEE P</cell></row><row><cell>Host A</cell><cell>7058949</cell><cell>1699554</cell><cell>0</cell><cell>18</cell><cell>0</cell></row><row><cell>Host B</cell><cell>6549326</cell><cell>1554767</cell><cell>0</cell><cell>18</cell><cell>14</cell></row><row><cell>Host C</cell><cell>6547513</cell><cell>2153509</cell><cell>20</cell><cell>0</cell><cell>17</cell></row><row><cell>Host D</cell><cell>8319504</cell><cell>2773740</cell><cell>19</cell><cell>0</cell><cell>0</cell></row><row><cell>Host E</cell><cell>7399881</cell><cell>2680160</cell><cell>19</cell><cell>0</cell><cell>0</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://sindice.com/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">http://www.linkedmdb.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">http://dbpedia.org/About</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgements This work was partially supported by the Swiss National Science Foundation under contract number 200021-118000. We would like to acknowledge Cathrin Weiss and Rob H. Warren for their help and contribution in the development and evolution of the ideas behind Avalanche.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Linked data -The story so far</title>
		<author>
			<persName><forename type="first">C</forename><surname>Bizer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Heath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Berners-Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Journal on Semantic Web and Information Systems</title>
				<meeting><address><addrLine>IJSWIS</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Network applications of bloom filters: A survey</title>
		<author>
			<persName><forename type="first">A</forename><surname>Broder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mitzenmacher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Internet Mathematics</title>
				<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="636" to="646" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Rdfpeers: a scalable distributed RDF repository based on a structured peer-to-peer network</title>
		<author>
			<persName><forename type="first">M</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Frank</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">13th International World Wide Web Conference (WWW)</title>
				<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="650" to="657" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Virtuoso</title>
		<author>
			<persName><forename type="first">O</forename><surname>Erling</surname></persName>
		</author>
		<ptr target="http://openlinksw.com/virtuoso/" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Data summaries for on-demand queries over linked data</title>
		<author>
			<persName><forename type="first">A</forename><surname>Harth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hose</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Karnstedt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Polleres</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-U</forename><surname>Sattler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Umbrich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">19th International World Wide Web Conference (WWW)</title>
				<imprint>
			<date type="published" when="2010-05">May 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Yars2: a federated repository for querying graph structured data from the web</title>
		<author>
			<persName><forename type="first">A</forename><surname>Harth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Umbrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hogan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Decker</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">6th International Semantic Web Conference (ISWC)</title>
				<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="211" to="224" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Executing SPARQL queries over the Web of linked data</title>
		<author>
			<persName><forename type="first">O</forename><surname>Hartig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bizer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-C</forename><surname>Freytag</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">8th International Semantic Web Conference (ISWC)</title>
				<imprint>
			<date type="published" when="2009-10">October 2009</date>
<biblScope unit="page" from="293" to="309" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">The SPARQL query graph model for query optimization</title>
		<author>
			<persName><forename type="first">O</forename><surname>Hartig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Heese</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">4th European Semantic Web Conference</title>
				<imprint>
			<date type="published" when="2007-06">June 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The state of the art in distributed query processing</title>
		<author>
			<persName><forename type="first">D</forename><surname>Kossmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="422" to="469" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">RDFStats -An extensible RDF statistics generator and library</title>
		<author>
			<persName><forename type="first">A</forename><surname>Langegger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wöß</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">8th International Workshop on Web Semantics</title>
				<meeting><address><addrLine>DEXA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009-09">September 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A Semantic Web middleware for virtual data integration on the web</title>
		<author>
			<persName><forename type="first">A</forename><surname>Langegger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wöß</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Blöchl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">5th European Semantic Web Conference</title>
				<imprint>
			<date type="published" when="2008-06">June 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
<title level="m" type="main">Programming Semantic Web applications: a synthesis of knowledge representation and semi-structured data</title>
		<author>
			<persName><forename type="first">O</forename><surname>Lassila</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
	<note type="report_type">Doctoral dissertation</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Estimating the cardinality of RDF graph patterns</title>
		<author>
			<persName><forename type="first">A</forename><surname>Maduko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Anyanwu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sheth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">16th International World Wide Web Conference (WWW)</title>
				<imprint>
			<date type="published" when="2007-05">May 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Scalable join processing on very large RDF graphs</title>
		<author>
			<persName><forename type="first">T</forename><surname>Neumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Weikum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">36th International Conference on Management of Data (SIGMOD)</title>
				<imprint>
			<date type="published" when="2010-06">June 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Querying distributed RDF data sources with SPARQL</title>
		<author>
			<persName><forename type="first">B</forename><surname>Quilitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Leser</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web: Research and Applications</title>
				<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="524" to="538" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Optimizing distributed joins with bloom filters</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Papapetrou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Siberski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 5th International Conference on Distributed Computing and Internet Technology</title>
				<meeting>the 5th International Conference on Distributed Computing and Internet Technology</meeting>
		<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="145" to="156" />
		</imprint>
	</monogr>
	<note>ICDCIT &apos;08</note>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
<title level="a" type="main">Networked graphs: a declarative mechanism for SPARQL rules, SPARQL views and RDF data integration on the web</title>
		<author>
<persName><forename type="first">S</forename><surname>Schenk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Staab</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">17th International World Wide Web Conference (WWW)</title>
				<imprint>
			<date type="published" when="2008-04">April 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
<title level="a" type="main">Federated database systems for managing distributed, heterogeneous and autonomous databases</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Sheth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Larson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="183" to="236" />
			<date type="published" when="1990-09">September 1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">SPARQL basic graph pattern optimization using selectivity estimation</title>
		<author>
			<persName><forename type="first">M</forename><surname>Stocker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Seaborne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bernstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kiefer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Reynolds</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">17th International World Wide Web Conference (WWW)</title>
				<imprint>
			<date type="published" when="2008-04">April 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Optimizing SPARQL queries over disparate RDF data sources through distributed semi-joins</title>
		<author>
			<persName><forename type="first">J</forename><surname>Zemánek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Schenk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Svatek</surname></persName>
		</author>
<note type="appendix">Query 2: SELECT ?name ?title WHERE { ?paper &lt;http://www.aktors.org/ontology/portal#has-author&gt; ?author . ?author &lt;http://www.aktors.org/ontology/portal#full-name&gt; ?name . ?paper &lt;http://www.aktors.org/ontology/portal#has-author&gt; ?avi . ?paper &lt;http://www.aktors.org/ontology/portal#has-title&gt; ?title . ?avi &lt;http://www.aktors.org/ontology/portal#full-name&gt; "AbrahamBernstein</note>
	</analytic>
	<monogr>
		<title level="m">7th International Semantic Web Conference (ISWC)</title>
				<imprint>
			<date type="published" when="2007-10">October 2007</date>
		</imprint>
	</monogr>
	<note>A Appendix Query 1: SELECT ?title ?author ?date WHERE { ?paperDBLP</note>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m">Query 3: SELECT ?title ?date WHERE { ?author &lt;http://www.aktors.org/ontology/portal#full-name&gt; "AbrahamBernstein" . ?paper &lt;http://www.aktors.org/ontology/portal#has-author&gt; ?author . ?paper &lt;http://www.aktors.org/ontology/portal#has-title&gt; ?title . ?paper &lt;http://www.aktors.org/ontology/portal#has-date&gt; ?date . ?paper &lt;http://www.aktors.org/ontology/portal#article-of-journal&gt; ?journal . ?journal &lt;http://www.aktors.org/ontology/portal#has-title&gt; "ISWC/ASWC</title>
		<imprint/>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
