<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">On Requirements for Federated Data Integration as a Compilation Process</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alessandro</forename><surname>Adamou</surname></persName>
							<email>alessandro.adamou@open.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">Knowledge Media Institute</orgName>
								<orgName type="institution">The Open University</orgName>
								<address>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mathieu</forename><surname>D'Aquin</surname></persName>
							<email>mathieu.daquin@open.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">Knowledge Media Institute</orgName>
								<orgName type="institution">The Open University</orgName>
								<address>
									<country key="GB">United Kingdom</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">On Requirements for Federated Data Integration as a Compilation Process</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">A83A46353FAE3B9A1C5823953E1235F3</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-19T15:42+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Linked Data</term>
					<term>Query federation</term>
					<term>Compilers</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Data integration problems are commonly viewed as interoperability issues, where the burden of reaching a common ground for exchanging data is distributed across the peers involved in the process. While apparently an effective approach towards standardization and interoperability, this poses a constraint on data providers who, for a variety of reasons, require backwards compatibility with proprietary or nonstandard mechanisms. Publishing a holistic data API is one such use case, where a single peer performs most of the integration work in a many-to-one scenario. Incidentally, this is also the base setting of software compilers, whose operational model comprises phases that perform analysis, linkage and assembly of source code and generation of intermediate code. There are several analogies with a data integration process, more so with data that live in the Semantic Web, but what requirements would a data provider need to satisfy, for an integrator to be able to query and transform its data effectively, with no further enforcements on the provider? With this paper, we inquire into what practices and essential prerequisites could turn this intuition into a concrete and exploitable vision, within Linked Data and beyond.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Open standards play an unquestionable role in the evolution of data interoperability, and an eminent example can undoubtedly be found in Linked Data. This set of principles and standards favors uniform federated querying across multiple data providers by applications. These applications, in turn, can serve many use cases, one being the exposure of an API that publishes aggregated data from multiple sources. One cannot expect such an API to conform to the same interoperability principles as the sources it draws from, due to possible backwards compatibility with legacy systems and other industrial constraints.</p><p>Implementing this process certainly benefits from standardized mechanisms for federated querying such as those offered by SPARQL; however, the translation of query results into the desired specifications relies upon the integrator itself. In Linked Data, the line is drawn at semantic interoperability with reuse of resources, be they terms of a vocabulary or data items, which leaves some loose ends, for instance as to how data URIs should be transformed if necessary.</p><p>Software compilers operate in analogy with use cases like the many-to-one scenario above, in that a single program analyses and links multiple files (the source code) into an output that is then transformed into an object that complies with the target specification (the machine-executable program). As the compiler literature is vast and its history long, we look into avenues for capitalizing on it.</p><p>With this paper, we intend to discuss the merits of these research questions:</p><p>RQ1. Is it possible to formulate a data integration problem based on federated querying as a compilation process? RQ2. 
If the answer to RQ1 is yes, what information should a data integration environment expose, for us to treat it like software code to be compiled?</p><p>Being able to answer yes to RQ1 would open up a range of possibilities for the principles and practices of data federation. Most of all, it would allow us to bring the craftsmanship of compiler experts into the field of data integration to research the optimal answer to RQ2. This could help solve specific integration problems or optimize existing solutions, effectively allowing us to discuss 'data compilation' as a discipline in its own right.</p><p>In Section 2, we outline the above data API scenario in greater detail. On its basis, in Section 3 we reformulate the associated data integration process in terms of the classic analysis/synthesis model of compilers. Finally, in Section 4 we give an insight into the further research being carried out on this vision.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Scenario</head><p>Given a collection of known linked data providers (hereinafter, sites) that expose a hierarchy of RDF graphs (datasets) through an interface such as SPARQL endpoints and/or dereferenceable Cool URIs, the goal is to produce a data feed published on a single endpoint (integrator), which selectively reuses data from the sites and encodes them in a custom target language. As is not uncommon in industrial and traditional data management, this language must give the impression that its provider is 'in control'. To that end, it satisfies the following:</p><p>1. a single represented item appears as an attribute/value map; 2. attributes are named according to an in-house naming convention (i.e. no ontology property names are reused); 3. values are represented as items per (1), up to a fixed level of recursion beyond which they are identified by a reference. These references are URIs resolved by the same API that produces the data feed (i.e. the API is self-contained).</p><p>These requirements are in stark contrast with the principles of Linked Data, which dictate that providers should be free to use their own vocabularies and identifiers, and that both should be reused, rather than concealed, by others.</p><p>Finally, we assume that some sites publish meta-level descriptions of their datasets as VoID or Data Cube manifests. These, combined with other meta-level information computed by the integrator (cf. RQ2), form the site profile.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Data integration in the front/back end compiler model</head><p>A compilation problem can be formulated in terms of requirements of the target machine code, e.g. that it has to be executable by processors of a certain family with certain instruction sets and register layout. Data integration can also be approached in terms of the requirements of the final data feed, i.e. the compiled data in a target language that agents of a certain type, human or machine, must be able to read and interpret. We aim to identify whether a similar parallel is possible in the operational model of the solutions to these problems. A traditional model of compiler design is depicted in Figure <ref type="figure" target="#fig_0">1</ref>. Its pivotal phase is the generation of code in an intermediate language for the program at hand. This phase is preceded by an analysis part, which comprises lexical, syntactic and semantic analysis, and is followed by a synthesis part, where the code in the target language is generated and optimizations are performed <ref type="bibr" target="#b4">[5]</ref>. Also, it is the compiler itself that has to fulfil the requirements of most phases, especially the synthesis ones, whereas source code is mostly required to be correct for the analysis phases not to fail (few programmers will take compiler optimizations into account when writing the code). Synthesis is also called the back end of the compiler, and the other phases its front end. The following sections break down these operational strands and seek a counterpart for each in the above scenario through query federation, where the burden of performing most of the integration lies on a single peer that we control, and that corresponds to the compiler.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Intermediate code generation</head><p>Intermediate code is generated in a language defined for and used by the compiler alone, in order to satisfy certain optimality conditions. An intermediate language is not necessary, but without one, a full native compiler would be required for each target architecture, instead of only a re-implementation of the synthesis. Porting this notion to our linked data integration scenario, without an intermediate language all the code present in a compilation unit, i.e. an instance of output of a site (in RDF or SPARQL results), would be rewritten directly into the target language, thus reducing the potential for detecting redundant references and collapsing them in the data feed (cf. Section 2 req. 3, self-containment).</p><p>We will assume RDF triples to be the formalism of choice, given their natural inclination to several layers of interoperability, and adapt the analysis and synthesis parts accordingly<ref type="foot" target="#foot_0">1</ref>. Also, there is an interesting parallel with the three-address intermediate code of compilers, which is a serialized form of decision trees on binary operators. The intermediate language itself is the combination of triples and a naming convention for their nodes, i.e. resources and literals, which is entirely up to the integrator. This naming convention is not required to make sense to the outside world; that is, we disregard inherently Linked Data features of RDF such as dereferenceable URIs<ref type="foot" target="#foot_1">2</ref>. We require, however, the following:</p><p>1. globality. The naming convention should be able to rewrite URIs of aligned resources (e.g. via owl:sameAs statements) into the same URI. 2. completeness. 
It must apply to every possible URI that appears in the data supplied by any site involved in the integration process.</p><p>A naming convention supports a URI pattern if it satisfies globality for all its occurrences. Completeness can be satisfied even for URIs whose scheme is not known a priori: a function that, for instance, prepends a prefix to the original URI if its pattern is unsupported would be a sufficient naming convention.</p></div>
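A minimal sketch of a naming convention satisfying both requirements, under the assumption that alignment information (e.g. an owl:sameAs closure) has already been materialised into a lookup table; the table contents and the intermediate URI scheme are invented for illustration.

```python
# Globality: aligned URIs collapse to the same canonical intermediate name.
# Completeness: a fallback prepends a fixed prefix to any URI whose pattern
# is unsupported, so every URI still receives a deterministic name.

CANONICAL = {  # e.g. derived from owl:sameAs statements (hypothetical data)
    "http://siteA.example.org/res/42": "urn:x-intermediate:42",
    "http://siteB.example.org/id/42": "urn:x-intermediate:42",
}
FALLBACK_PREFIX = "urn:x-intermediate:ext:"

def rename(uri):
    """Rewrite a source URI into the intermediate naming convention."""
    if uri in CANONICAL:
        return CANONICAL[uri]      # globality for supported patterns
    return FALLBACK_PREFIX + uri   # completeness for everything else
```

The fallback branch is exactly the prefix-prepending function mentioned above as sufficient for completeness.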
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Front end: analysis and assembly</head><p>Software compilers perform lexical, syntactic and semantic analysis on the source code and derivative data structures, to check whether the code is an occurrence of the programming language and respects semantic requirements such as type matching and variable scopes. These phases are usually backed by symbol tables, i.e. data structures maintained by the compiler that keep track of the occurrences of entities such as variable names, function signatures and objects.</p><p>To begin with, we define the compilation unit to be an instance of the output of a site (in RDF or SPARQL results) given a query on it. The role of these analyses in data integration is ambivalent, depending on what elements we choose to be the symbols, syntax and semantics of the process. If we establish that the symbols are the constituents of RDF (URIs, literals, bnodes etc.), then the analysis part coincides with that of an RDF parser; there are no site-specific requirements other than delivering well-formed compilation units, at the price of not being able to perform per-site optimizations. If instead we apply the lexicon-syntax-semantics paradigm differently, then we can expect advantages in translating compilation units to the intermediate language. Here, we will assume that the patterns for constructing URIs in each dataset are part of the lexicon, and that their instances are tracked in the symbol table. Assuming the above in our compiler model, the semantic analysis phase can now include matching of RDF types with URI patterns <ref type="bibr" target="#b1">[2]</ref> and heuristics for detecting and collapsing equivalent entities <ref type="bibr" target="#b5">[6]</ref>. 
As part of a process called linking, where a single object is built out of multiple compilation units, the results of this analysis (which we can assume to reside in an assembly plan maintained by the integrator) can be applied to the generation of unified intermediate code. The question then arises as to what information about the sites and their datasets the assembly plan should contain in order to perform linking effectively. In the present scenario and compilation model, the goal is to avoid query broadcasting and its network and computational overhead: it should be possible to determine the eligibility of a site as a candidate for providing relevant data, therefore worth querying, and the shape of the data it can deliver, so as to determine what ad-hoc query to issue to it. We are currently investigating how intermediate code that transforms URIs to satisfy globality and completeness can be generated if:</p><p>1. all entities are typed, either explicitly or implicitly; 2. the relationship between a URI pattern in a dataset and the types of its instances, or their identifying property values, is explicit; 3. it is known which conventions are employed in the assertions that are materialised in the data, and which are left to inferencing: for instance, which property of an inverse property pair is used in asserted statements.<ref type="foot" target="#foot_2">3</ref> Related to (1), explicit types can be found in VoID class partitions <ref type="foot" target="#foot_3">4</ref> and Data Cube slicing <ref type="foot" target="#foot_4">5</ref>, or by sampling the dataset directly; implicit ones are obtainable through inferencing on the compilation units and the ontologies that describe their vocabularies. Requirement (2) is largely unsatisfied by the existing standards and literature and is mostly left to research. Finally, (3) finds partial fulfilment in VoID property partitioning combined with ontologies.</p></div>
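As an illustration of requirement (2) and of eligibility testing against the assembly plan, the following sketch assumes site profiles that pair URI patterns with the RDF types of their instances; the site names, patterns and types are invented for the example.

```python
import re

# Hypothetical assembly plan built from site profiles: each site lists URI
# patterns together with the RDF types of their instances (requirement 2).
# This makes it possible to decide which sites are worth querying for a
# given type, avoiding query broadcasting.

assembly_plan = {
    "siteA": [(re.compile(r"^http://siteA\.example\.org/person/\d+$"),
               {"foaf:Person"})],
    "siteB": [(re.compile(r"^http://siteB\.example\.org/doc/\w+$"),
               {"foaf:Document"})],
}

def eligible_sites(rdf_type):
    """Return the sites eligible as candidates for instances of `rdf_type`."""
    return sorted(site for site, patterns in assembly_plan.items()
                  if any(rdf_type in types for _, types in patterns))
```

Only eligible sites would then receive an ad-hoc query, shaped according to the URI patterns registered for them.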
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Back end: optimization and target encoding</head><p>An optimizing compiler modifies generated intermediate and target code in order to improve certain efficiency measures that the compiler supports. What this translates to in a data integration scenario is largely under investigation, but we have begun by identifying certain tasks; as part of target-independent optimization:</p><p>-Consolidation of matching data items and elimination of redundant attributes, through ontology alignment and other means. -Handling query expansion: identifying and constructing further queries to be issued to sites in order to perform just-in-time linking. -Serial and parallel scheduling of the queries built through query expansion.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>As part of target-dependent optimization:</head><p>-Determining the optimum threshold for including recursively-referenced items in one feed, beyond which their data are replaced with references. -Rewriting attribute names to avoid name clashing with attributes in other data feeds resulting from a different query to the same sites.</p><p>While we do not expect target-dependent optimization to raise significant requirements for site profiles, we hypothesize that target-independent optimization tasks can partly rely on the symbol tables generated in the front end phases.</p></div>
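The target-independent query expansion and scheduling tasks listed in Section 3.3 can be sketched as follows; `run_query` is a placeholder standing in for an actual SPARQL client call, and the expansion strategy shown is deliberately naive.

```python
from concurrent.futures import ThreadPoolExecutor

def run_query(site, query):
    """Placeholder for a real SPARQL client call against `site`."""
    return (site, query, ["<result>"])

def expand(seed_query, sites):
    # Query expansion: one ad-hoc follow-up query per eligible site.
    return [(site, seed_query + " # scoped to " + site) for site in sites]

def schedule(seed_query, sites):
    # Parallel scheduling: independent follow-up queries run concurrently;
    # map() preserves the submission order of the expanded queries.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda sq: run_query(*sq),
                             expand(seed_query, sites)))
```

Serial scheduling would be the degenerate case of the same loop with a single worker, useful when one query's results feed the expansion of the next.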
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusions</head><p>We have made a case for formulating typical legacy data integration problems using the paradigm of software compilers, anticipating that compiler optimizations may thereby contribute to this cause. Although we have not identified previous evidence of data integration problems formulated using compilers, there is recent literature on formulating models and challenges for data integration on the Web. Paton et al. have postulated a model for continuously improving integration in a purely Linked Data setting <ref type="bibr" target="#b3">[4]</ref>, which significantly inspired our work. Hoang et al. have collated scholarly literature on the practices of semantic mashups, with which our use case shares several contact points <ref type="bibr" target="#b2">[3]</ref>. As for compiler-like approaches in the Semantic Web, we previously laid out some seminal work in the context of interpreting ontology networks <ref type="bibr" target="#b0">[1]</ref>. We are currently elaborating on the position given by this paper as applied to a concrete instantiation of its scenario, now in the process of formalizing back end requirements.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1.</head><label>1</label><figDesc>Fig. 1. Compilation phases in the classical front/back end model.</figDesc><graphic coords="3,134.77,242.37,345.83,117.79" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">One could also opt for OWL as intermediate language, though we would have to be wary of the caveats of translating RDF triples into OWL axioms appropriately.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">The way RDF processors generate blank node IDs can be such an implementation.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">Of course some of these limitations can be overcome in SPARQL by adding optional triple patterns and unions to a query, but at a generally impracticable overhead.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">Vocabulary of Interlinked Datasets, http://www.w3.org/TR/void/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">RDF Data Cube vocabulary, http://www.w3.org/TR/vocab-data-cube/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The foundations of virtual ontology networks</title>
		<author>
			<persName><forename type="first">Alessandro</forename><surname>Adamou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Ciancarini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aldo</forename><surname>Gangemi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Valentina</forename><surname>Presutti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">I-SEMANTICS 2013 -9th International Conference on Semantic Systems</title>
				<editor>
			<persName><forename type="first">Marta</forename><surname>Sabou</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Eva</forename><surname>Blomqvist</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Tommaso</forename><surname>Di Noia</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Harald</forename><surname>Sack</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Tassilo</forename><surname>Pellegrini</surname></persName>
		</editor>
		<meeting><address><addrLine>Graz, Austria</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="49" to="56" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Extracting URI patterns from SPARQL endpoints</title>
		<author>
			<persName><forename type="first">Mathieu</forename><surname>D'Aquin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alessandro</forename><surname>Adamou</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
		<respStmt>
			<orgName>Knowledge Media Institute</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Semantic information integration with linked data mashups approaches</title>
		<author>
			<persName><forename type="first">Hanh Huu</forename><surname>Hoang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tai</forename><surname>Nguyen-Phuoc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Duy</forename><surname>Cung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Khanh</forename><surname>Truong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dosam</forename><surname>Hwang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jason</forename><forename type="middle">J</forename><surname>Jung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IJDSN</title>
		<imprint>
			<date type="published" when="2014">2014. 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Pay-as-you-go data integration for linked data: opportunities, challenges and architectures</title>
		<author>
			<persName><forename type="first">Norman</forename><forename type="middle">W</forename><surname>Paton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Klitos</forename><surname>Christodoulou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alvaro</forename><forename type="middle">A A</forename><surname>Fernandes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bijan</forename><surname>Parsia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cornelia</forename><surname>Hedeler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 4th International Workshop on Semantic Web Information Management, SWIM 2012</title>
				<editor>
			<persName><forename type="first">Roberto</forename><surname>De Virgilio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Fausto</forename><surname>Giunchiglia</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Letizia</forename><surname>Tanca</surname></persName>
		</editor>
		<meeting>the 4th International Workshop on Semantic Web Information Management, SWIM 2012<address><addrLine>Scottsdale, AZ, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page">3</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Compiler Design</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Puntambekar</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
			<publisher>Technical Publications</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Mining equivalent relations from linked data</title>
		<author>
			<persName><forename type="first">Ziqi</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anna</forename><forename type="middle">Lisa</forename><surname>Gentile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Isabelle</forename><surname>Augenstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Eva</forename><surname>Blomqvist</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fabio</forename><surname>Ciravegna</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Annual Meeting of the Association for Computational Linguistics, ACL 2013</title>
		<title level="s">Short Papers</title>
		<meeting><address><addrLine>Sofia, Bulgaria</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="289" to="293" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
