DISWEB'06 275

Information Sharing for the Semantic Web — a Schema Transformation Approach

Lucas Zamboulis and Alexandra Poulovassilis
School of Computer Science and Information Systems, Birkbeck College, University of London, London WC1E 7HX
{lucas,ap}@dcs.bbk.ac.uk

Abstract. This paper proposes a framework for transforming and integrating heterogeneous XML data sources, making use of known correspondences from them to ontologies expressed in the form of RDFS schemas. The paper first illustrates how correspondences to a single ontology can be exploited. The approach is then extended to the case where correspondences may refer to multiple ontologies, themselves interconnected via schema transformation rules. The contribution of this research is an XML-specific approach to the automatic transformation and integration of XML data, making use of RDFS ontologies as a 'semantic bridge'.

1 Introduction

This paper proposes a framework for the automatic transformation and integration of heterogeneous XML data sources by exploiting known correspondences between them to ontologies expressed as RDFS schemas. Our algorithms generate schema transformation rules implemented in the AutoMed heterogeneous data integration system (http://www.doc.ic.ac.uk/automed/). These rules can be used to transform an XML data source into a target format, or to integrate a set of heterogeneous XML data sources into a common format. The transformation/integration may be virtual or materialised.
There are several advantages of our approach, compared with, say, constructing pairwise mappings between the XML data sources, or between each data source and some known global XML format: known semantic correspondences between data sources and domain and other ontologies can be utilised for transforming or integrating the data sources; the correspondences from the data sources to the ontology do not need to perform a complete mapping of the data sources; and changes in a data source, or addition or removal of a data source, do not affect the other sets of correspondences.

Paper outline: Section 2 compares our approach with related work. Section 3 gives an overview of AutoMed to the level of detail necessary for the purposes of this paper. Section 4 presents the process of transforming and integrating XML data sources which are linked to the same ontology, while Section 5 extends this to the more general case of data sources being linked to different ontologies. Section 6 gives our concluding remarks and plans for future work.

2 Related Work

The work in [4, 5] also undertakes data integration through the use of ontologies. However, this is by transforming the source data into a common RDF format, in contrast to our integration approach in which the common format is an XML schema. In [10], mappings from DTDs to RDF ontologies are used in order to reformulate path queries expressed over a global ontology into equivalent queries over the XML data sources. In [1], an ontology is used as a global virtual schema for heterogeneous XML data sources using LAV mapping rules. SWIM [3] uses mappings from various data models (including XML and relational) to RDF, in order to integrate data sources modelled in different modelling languages. In [11], XML Schema constructs are mapped to OWL constructs and evaluation of queries on the virtual OWL global schema is supported.
In contrast to all of these approaches, we use RDFS schemas merely as a 'semantic bridge' for transforming/integrating XML data, and the target/global schema is in all cases an XML schema. Other approaches to transforming or integrating XML data which do not make use of RDF/S or OWL include [16, 18–20, 23]. Our own earlier work in [24, 25] also discussed the transformation and integration of XML data sources. However, this work was not able to make use of correspondences between the data sources and ontologies. The approach we present here is able to use information that identifies an element/attribute in one data source as being equivalent to, a superclass of, or a subclass of an element/attribute in another data source. This information is generated from the correspondences between the data sources and ontologies. It allows more semantic relationships to be inferred between the data sources, and hence more information to be retained from a data source when it is transformed into a target format.

3 Overview of AutoMed

AutoMed is a heterogeneous data transformation and integration system which offers the capability to handle virtual, materialised and hybrid data integration across multiple data models. It supports a low-level hypergraph-based data model (HDM) and provides facilities for specifying higher-level modelling languages in terms of this HDM. An HDM schema consists of a set of nodes, edges and constraints, and each modelling construct of a higher-level modelling language is specified as some combination of HDM nodes, edges and constraints. For any modelling language M specified in this way (via the API of AutoMed's Model Definitions Repository), AutoMed provides a set of primitive schema transformations that can be applied to schema constructs expressed in M. In particular, for every construct of M there is an add and a delete primitive transformation which add to/delete from a schema an instance of that construct.
For those constructs of M which have textual names, there is also a rename primitive transformation. Instances of modelling constructs within a particular schema are identified by means of their scheme, enclosed within double chevrons ⟨⟨. . .⟩⟩. AutoMed schemas can be incrementally transformed by applying to them a sequence of primitive transformations, each adding, deleting or renaming just one schema construct (thus, in general, AutoMed schemas may contain constructs of more than one modelling language). A sequence of primitive transformations from one schema S1 to another schema S2 is termed a pathway from S1 to S2 and denoted by S1 → S2. All source, intermediate, and integrated schemas, and the pathways between them, are stored in AutoMed's Schemas & Transformations Repository. Each add and delete transformation is accompanied by a query specifying the extent of the added or deleted construct in terms of the rest of the constructs in the schema. This query is expressed in a functional query language, IQL, and we will see some examples of IQL queries in Section 4. Also available are extend and contract primitive transformations, which behave in the same way as add and delete except that they state that the extent of the new/removed construct cannot be precisely derived from the rest of the constructs. Each extend and contract transformation takes a pair of queries that specify a lower and an upper bound on the extent of the construct. These bounds may be Void or Any, which respectively indicate no known information about the lower or upper bound of the extent of the new construct.
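As an illustration, the five primitive transformations and their bounded extend/contract forms can be modelled as plain records; this is a sketch in our own notation, not AutoMed's actual API, and it anticipates the automatic reversibility of transformations discussed next.

```python
# Illustrative model of AutoMed-style primitive transformations.
# All class and function names here are ours, not AutoMed's API.
from dataclasses import dataclass
from typing import Optional

VOID, ANY = "Void", "Any"   # no known lower / upper bound on an extent

@dataclass(frozen=True)
class Step:
    kind: str                       # add | delete | extend | contract | rename
    scheme: str                     # e.g. "<<school, name>>"
    query: Optional[str] = None     # extent query, for add/delete
    lower: Optional[str] = None     # lower-bound query, for extend/contract
    upper: Optional[str] = None     # upper-bound query, for extend/contract
    new_name: Optional[str] = None  # for rename

INVERSE = {"add": "delete", "delete": "add",
           "extend": "contract", "contract": "extend", "rename": "rename"}

def reverse(step: Step) -> Step:
    """Reverse one step: add/extend become delete/contract with the same
    arguments, and rename swaps its two names."""
    if step.kind == "rename":
        return Step("rename", step.new_name, new_name=step.scheme)
    return Step(INVERSE[step.kind], step.scheme, step.query,
                step.lower, step.upper)

def reverse_pathway(pathway):
    """Reverse a pathway S1 -> S2 into a pathway S2 -> S1."""
    return [reverse(s) for s in reversed(pathway)]
```

A pathway is then simply a list of such steps, and its reversal reverses both the order of the steps and each step individually.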
The queries supplied with primitive transformations can be used to translate queries or data along a transformation pathway S1 → S2 by means of query unfolding: for translating a query on S1 to a query on S2, the delete, contract and rename steps are used, while for translating data from S1 to data on S2, the add, extend and rename steps are used — we refer the reader to [14] for details.

The queries supplied with primitive transformations also provide the necessary information for these transformations to be automatically reversible, in that each add/extend transformation is reversed by a delete/contract transformation with the same arguments, while each rename is reversed by a rename with the two arguments swapped. As discussed in [15], this means that AutoMed is a both-as-view (BAV) data integration system: the add/extend steps in a transformation pathway correspond to Global-As-View (GAV) rules, while the delete and contract steps correspond to Local-As-View (LAV) rules. If a GAV view is derived solely from add steps, it will be exact in the terminology of [12]. If, in addition, it is derived from one or more extend steps using their lower-bound (upper-bound) queries, then the GAV view will be sound (complete) in the terminology of [12]. Similarly for LAV views. An in-depth comparison of BAV with the GAV, LAV and GLAV [6, 13] approaches to data integration can be found in [15, 9], while [14] discusses the use of BAV in a peer-to-peer data integration setting.

3.1 Representing XML schemas in AutoMed

The standard schema definition languages for XML are DTD [21] and XML Schema [22]. Both of these provide grammars to which conforming documents adhere, and do not abstract the tree structure of the actual documents.
In our schema transformation and integration context, knowing the actual structure facilitates schema traversal, structural comparison between a source and a target schema, and restructuring of the source schema(s) that are to be transformed and/or integrated. Moreover, such a schema type means that the queries supplied with the AutoMed primitive transformations are essentially path queries, which are easily generated and easily translated into XPath/XQuery for interaction with the XML data sources. In addition, it may not be the case that all the data sources have an accompanying DTD or XML Schema to which they conform.

We have therefore defined a simple modelling language called XML DataSource Schema (XMLDSS) which summarises the structure of an XML document. XMLDSS schemas consist of four kinds of constructs:

Element: Elements, e, are identified by a scheme ⟨⟨e⟩⟩ and are represented by nodes in the HDM.

Attribute: Attributes, a, belonging to elements, e, are identified by a scheme ⟨⟨e, a⟩⟩. They are represented by a node in the HDM, representing the attribute; an edge between this node and the node representing the element e; and a cardinality constraint stating that an instance of e can have at most one instance of a associated with it, and that an instance of a can be associated with one or more instances of e.

NestList: NestLists are parent-child relationships between two elements ep and ec and are identified by a scheme ⟨⟨ep, ec, i⟩⟩, where i is the position of ec within the list of children of ep. In the HDM, they are represented by an edge between the nodes representing ep and ec, and a cardinality constraint that states that each instance of ep is associated with zero or more instances of ec, and each instance of ec is associated with precisely one instance of ep.¹

PCData: In any XMLDSS schema there is one construct with scheme ⟨⟨PCData⟩⟩, representing all the instances of PCData within an XML document.
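The four construct kinds and their schemes could be rendered in code as follows; this is a hypothetical in-memory representation of ours, not the HDM encoding itself.

```python
# Hypothetical in-memory rendering of the four XMLDSS construct kinds
# and their schemes; the names are ours, not AutoMed's.
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:            # scheme <<e>>
    name: str

@dataclass(frozen=True)
class Attribute:          # scheme <<e, a>>; at most one a per instance of e
    element: str
    name: str

@dataclass(frozen=True)
class NestList:           # scheme <<ep, ec, i>>
    parent: str
    child: str
    position: int         # i: position of ec among the children of ep

PCDATA = Element("PCData")  # the single PCData construct of any schema
```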
In an XML document there may be elements with the same name occurring at different positions in the tree. In XMLDSS schemas we therefore use an identifier of the form elementName$count for each element in the schema, where count is a counter incremented every time the same elementName is encountered in a depth-first traversal of the schema. If the suffix $count is omitted from an element name, then the suffix $1 is assumed. For the XML documents themselves, our XML wrapper generates a unique identifier of the form elementName$count&instanceCount for each element, where instanceCount is a counter identifying each instance of elementName$count in the document.

The XMLDSS schema, S, of an XML document, D, is derived by our XML wrapper by means of a depth-first traversal of D and is equivalent to the tree resulting as an intermediate step in the creation of a minimal dataguide [7]. However, unlike dataguides, we do not merge common sub-trees and the schema remains a tree rather than a DAG.

¹ Here, the fact that IQL is inherently list-based means that the ordering of children instances of ec under parent instances of ep is preserved within the extent of the NestList ⟨⟨ep, ec, i⟩⟩.

To illustrate XMLDSS schemas, consider the following XML document:

<university>
  <school name="...">
    <academic>
      <name>Dr. G. Grigoriadis</name>
      <office>123</office>
    </academic>
    <academic>
      <name>Prof. A. Karakassis</name>
      <office>111</office>
    </academic>
    <academic>
      <name>Dr. A. Papas</name>
      <office>321</office>
    </academic>
  </school>
</university>

The XMLDSS schema extracted from this document is S1 in Figure 1. Note that a new root element r is generated for each XMLDSS schema, populated by a unique instance r&1. This is useful in adopting a more uniform approach to schema restructuring and schema integration, by not having to consider whether schemas have the same or different roots. As mentioned earlier, after a modelling language has been specified in terms of the HDM, AutoMed automatically makes available a set of primitive transformations for transforming schemas defined in that modelling language.
Thus, for XMLDSS schemas there are transformations addElement(⟨⟨e⟩⟩, query), addAttribute(⟨⟨e, a⟩⟩, query), addNestList(⟨⟨ep, ec, i⟩⟩, query), and similar transformations for the extend, delete, contract and rename of Element, Attribute and NestList constructs.

4 Transforming and Integrating XML Data Sources

In this section we consider first a scenario in which two XMLDSS schemas S1 and S2 are each semantically linked to an RDFS schema by means of a set of correspondences. These correspondences may be defined by a domain expert or extracted by a process of schema matching from the XMLDSS schemas and/or underlying XML data, e.g. using the techniques described in [17]. Each correspondence maps an XMLDSS Element or Attribute construct to an IQL query over the RDFS schema (so correspondences are LAV mappings).

In Section 4.1 we show how these correspondences can be used to generate a transformation pathway from S1 to an intermediate schema IS1, and a pathway from S2 to an intermediate schema IS2. The schemas IS1 and IS2 are 'conformed' in the sense that they use the same terms for the same RDFS concepts. Due to the bidirectionality of BAV, from these two pathways S1 → IS1 and S2 → IS2 the reverse pathways IS1 → S1 and IS2 → S2 can be automatically derived.

In Section 4.2 we show how a transformation pathway IS1 → IS2 can then be automatically generated. An overall transformation pathway from S1 to S2 can finally be obtained by composing the three pathways S1 → IS1, IS1 → IS2 and IS2 → S2. This pathway can subsequently be used to automatically translate queries expressed on S2 to operate on S1, using AutoMed's XML Wrapper over source S1 to return the query results. Or the pathway can be used to automatically transform data that is structured according to S1 to be structured according to S2, and an XML document structured according to S2 can be output.
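Since a pathway is just a sequence of primitive transformations, composing the three pathways amounts to concatenating their step sequences; a minimal sketch in our own notation:

```python
# Sketch: a pathway is a list of transformation steps, so the overall
# pathway S1 -> S2 is the concatenation of its constituent pathways
# S1 -> IS1, IS1 -> IS2 and IS2 -> S2.
def compose(*pathways):
    return [step for pathway in pathways for step in pathway]
```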
In Section 4.3 we discuss the automatic integration of a number of XML data sources described by XMLDSS schemas S1, . . . , Sn, each semantically linked to a single RDFS schema by a set of correspondences. This process extends the approach of Sections 4.1 and 4.2 to integrate a set of schemas into a single global XMLDSS schema.

Fig. 1. XMLDSS schemas S1 and S2, RDFS schema R1 and conformed XMLDSS schemas IS1 and IS2.

Table 1.
Correspondences between XMLDSS schema S1 and R1

⟨⟨university⟩⟩ →
  ⟨⟨University⟩⟩
⟨⟨school⟩⟩ →
  [s | {c, u} ← ⟨⟨belongs, College, University⟩⟩; {s, c} ← ⟨⟨belongs, School, College⟩⟩]
⟨⟨school, name⟩⟩ →
  [{s, l} | {c, u} ← ⟨⟨belongs, College, University⟩⟩; {s, c} ← ⟨⟨belongs, School, College⟩⟩; {s, l} ← ⟨⟨name, School, Literal⟩⟩]
⟨⟨academic⟩⟩ →
  [s2 | {c, u} ← ⟨⟨belongs, College, University⟩⟩; {s1, c} ← ⟨⟨belongs, School, College⟩⟩; {s2, s1} ← ⟨⟨belongs, Staff, School⟩⟩; member s2 ⟨⟨AcademicStaff⟩⟩]
⟨⟨name⟩⟩ →
  [o | o ← generateElemUID 'name' (count [l | {c, u} ← ⟨⟨belongs, College, University⟩⟩; {s1, c} ← ⟨⟨belongs, School, College⟩⟩; {s2, s1} ← ⟨⟨belongs, Staff, School⟩⟩; member s2 ⟨⟨AcademicStaff⟩⟩; {s2, l} ← ⟨⟨name, Staff, Literal⟩⟩])]
⟨⟨office⟩⟩ →
  [o | o ← generateElemUID 'office' (count [l | {c, u} ← ⟨⟨belongs, College, University⟩⟩; {s1, c} ← ⟨⟨belongs, School, College⟩⟩; {s2, s1} ← ⟨⟨belongs, Staff, School⟩⟩; member s2 ⟨⟨AcademicStaff⟩⟩; {s2, l} ← ⟨⟨office, Staff, Literal⟩⟩])]

4.1 Schema Conformance

In our approach, a correspondence defines an Element or Attribute of an XMLDSS schema by means of an IQL path query over an RDFS schema². In particular, an Element e may map either to a Class c, or to a path ending with a class-valued property of the form ⟨⟨p, c1, c2⟩⟩, or to a path ending with a literal-valued property of the form ⟨⟨p, c, Literal⟩⟩; additionally, the correspondence may state that the instances of a class are constrained by membership in some subclass. An Attribute may map either to a literal-valued property or to a path ending with a literal-valued property. Our correspondences are similar to path-path correspondences in [1], in the sense that a path from the root of an XMLDSS schema to a node corresponds to a path in the RDFS schema. For example, Tables 1 and 2 show the correspondences between the XMLDSS schemas S1 and S2 and the RDFS schema R1 (Figure 1).
In Table 1 the 1st correspondence maps element ⟨⟨university⟩⟩ to class ⟨⟨University⟩⟩. The 2nd correspondence states that the extent of element ⟨⟨school⟩⟩ corresponds to the instances of class School derived from the join of properties ⟨⟨belongs, College, University⟩⟩ and ⟨⟨belongs, School, College⟩⟩ on their common class construct, College.³ In the 4th correspondence, element ⟨⟨academic⟩⟩ corresponds to the instances of class Staff derived from the specified path expression that are also members of AcademicStaff. In the 5th correspondence, the IQL function generateElemUID generates as many instances for element ⟨⟨name⟩⟩ as specified by its second argument, i.e. the number of instances of the property ⟨⟨name, Staff, Literal⟩⟩ in the path expression specified as the argument to the count function. The remaining correspondences in Tables 1 and 2 are similar.

² An RDFS schema can be represented in the HDM using five kinds of constructs: Class, Property, subClassOf, subPropertyOf, Literal. See [26] for details.

Table 2. Correspondences between XMLDSS schema S2 and R1

⟨⟨staffMember, name⟩⟩ →
  [{s2, l} | {c, u} ← ⟨⟨belongs, College, University⟩⟩; {s1, c} ← ⟨⟨belongs, School, College⟩⟩; {s2, s1} ← ⟨⟨belongs, Staff, School⟩⟩; {s2, l} ← ⟨⟨name, Staff, Literal⟩⟩]
⟨⟨staffMember⟩⟩ →
  [s2 | {c, u} ← ⟨⟨belongs, College, University⟩⟩; {s1, c} ← ⟨⟨belongs, School, College⟩⟩; {s2, s1} ← ⟨⟨belongs, Staff, School⟩⟩]
⟨⟨office⟩⟩ →
  [o | o ← generateElemUID 'office' (count [{s2, l} | {c, u} ← ⟨⟨belongs, College, University⟩⟩; {s1, c} ← ⟨⟨belongs, School, College⟩⟩; {s2, s1} ← ⟨⟨belongs, Staff, School⟩⟩; {s2, l} ← ⟨⟨office, Staff, Literal⟩⟩])]
⟨⟨college, name⟩⟩ →
  [{c, l} | {c, u} ← ⟨⟨belongs, College, University⟩⟩; {c, l} ← ⟨⟨name, College, Literal⟩⟩]
⟨⟨college⟩⟩ →
  [c | {c, u} ← ⟨⟨belongs, College, University⟩⟩]
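The behaviour ascribed above to generateElemUID can be pictured as follows; this is our own hedged reconstruction of what such an IQL function computes, minting synthetic element-instance identifiers in the elementName$count&instanceCount form of Section 3.1.

```python
# Hedged sketch of an IQL function like generateElemUID: given an element
# name and a count n, mint n synthetic element-instance identifiers
# name$1&1 ... name$1&n (identifier scheme per Section 3.1).
# This is our reconstruction, not AutoMed's implementation.
def generate_elem_uid(name, n, schema_count=1):
    return [f"{name}${schema_count}&{i}" for i in range(1, n + 1)]
```

For example, a count of 3 for element office would yield office$1&1, office$1&2 and office$1&3.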
The conformance of a pair of XMLDSS schemas S1 and S2 to equivalent XMLDSS schemas IS1 and IS2 that represent the same concepts in the same way is achieved by renaming the constructs of S1 and S2 using the sets of correspondences from these schemas to a common ontology. For every correspondence i in the set of correspondences between an XMLDSS schema S and an ontology R, a rename AutoMed transformation is generated, as follows:

1. If i concerns an Element e:
   (a) If e maps directly to a Class c, rename e to c. If the instances of c are constrained by membership in a subclass csub of c, rename e to csub.
   (b) Else, if e maps to a path in R ending with a class-valued Property, rename e to s, where s is the concatenation of the labels of the Class and Property constructs of the path, separated by '.'. If the instances of a Class c in this path are constrained by membership in a subclass, then the label of the subclass is used instead within s.
   (c) Else, if e maps to a path in R ending with a literal-valued Property ⟨⟨p, c, Literal⟩⟩, rename e as in step 1b, but without appending the label Literal to s.
2. If i concerns an Attribute a, then a must map to a path in R ending with a literal-valued Property ⟨⟨p, c, Literal⟩⟩, and it is renamed as Element e in step 1c.

³ The IQL query defining this correspondence may be read as "return all values s such that the pair of values {c, u} is in the extent of construct ⟨⟨belongs, College, University⟩⟩ and the pair of values {s, c} is in the extent of construct ⟨⟨belongs, School, College⟩⟩". IQL is a comprehensions-based language and we refer the reader to [8] for details of its syntax, semantics and implementation. Such languages subsume query languages such as SQL-92 and OQL in expressiveness [2]. There are AutoMed wrappers for SQL and OQL data sources and these translate fragments of IQL into SQL or OQL. Translating between fragments of IQL and XPath/XQuery is also straightforward — see Section 4.4 below.
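The name construction in steps 1(b), 1(c) and 2 can be sketched as a small helper; this is an assumed illustration of ours (not AutoMed code), taking the path as a pre-ordered list of class and property labels.

```python
# Sketch of the renaming in steps 1(b)-(c) and 2: concatenate the Class and
# Property labels along the RDFS path with '.', substituting a subclass label
# where membership is constrained, and dropping a trailing 'Literal'.
# Our illustrative helper, not part of AutoMed.
def conformed_name(path, subclass_of=None):
    """path: alternating class/property labels from the ontology path,
    e.g. ['University', 'belongs', 'College', 'belongs', 'School'].
    subclass_of: optional {classLabel: subclassLabel} membership constraints."""
    subclass_of = subclass_of or {}
    labels = [subclass_of.get(l, l) for l in path if l != "Literal"]
    return ".".join(labels)
```

With the subclass constraint Staff → AcademicStaff, this reproduces names such as University.belongs.College.belongs.School.belongs.AcademicStaff from Figure 1.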
Note that not all the constructs of S1 and S2 need be mapped by correspondences to the ontology. Such constructs are not affected and are treated as-is by the subsequent schema restructuring phase. Figure 1 shows the schemas IS1 and IS2 produced by applying to S1 and S2 the renamings arising from the sets of correspondences in Tables 1 and 2.

4.2 Schema restructuring

In order to next transform schema IS1 to have the same structure as schema IS2, we have developed a schema restructuring algorithm that, given a source XMLDSS schema S and a target XMLDSS schema T, automatically transforms S to the structure of T, given that S and T have been previously conformed. This algorithm is able to use information that identifies an element/attribute in S as being equivalent to, a superclass of, or a subclass of an element/attribute in T. This information may be produced by, for example, a schema matching tool or, in our context here, via correspondences to an RDFS ontology. We note that this algorithm is an extension of our earlier schema restructuring algorithm described in [25], which could only handle equivalence information between elements/attributes and could not exploit superclass and subclass information. The extended algorithm allows more semantic relationships to be inferred between S and T, and hence more information to be retained from S when it is transformed into T.

The restructuring algorithm consists of a "growing phase", in which T is traversed in a depth-first fashion and S is augmented with any constructs from T that it is missing, followed by a "shrinking phase", in which the augmented S is traversed in a depth-first fashion and any construct present in S but not in T is removed.

The AutoMed transformations generated by the schema restructuring algorithm for transforming schema IS1 to schema IS2 are illustrated in Table 3. In the growing phase, the first three transformations concern the element ⟨⟨Staff⟩⟩ of IS2.
Fig. 2. Applying the growing phase to schema IS1.

This element is inserted in IS1 using Element ⟨⟨AcademicStaff⟩⟩, which corresponds to a class that is a subclass of the class ⟨⟨Staff⟩⟩ corresponds to in the RDFS ontology; the ren IQL function is used here to rename the instances of Element ⟨⟨AcademicStaff⟩⟩ appropriately. After that, a NestList is inserted, linking ⟨⟨Staff⟩⟩ to its parent, which is the root r, using the path from r to AcademicStaff. ⟨⟨Staff⟩⟩ in T is not linked to the PCData construct, and therefore its attribute is handled next. The addAttribute transformation performs an element-to-attribute transformation by inserting Attribute ⟨⟨Staff, name⟩⟩ using the extents of ⟨⟨AcademicStaff, name⟩⟩ and ⟨⟨name, PCData⟩⟩. The following three transformations insert Element ⟨⟨Staff.office⟩⟩ along with its incoming and outgoing NestList constructs in a similar manner. Then the last two transformations insert Element ⟨⟨College⟩⟩ along with its Attribute and its incoming NestList. Since there is no information relevant to the extents of these constructs in S, extend transformations are used, with Void as the lower-bound query.
Note however that the upper-bound query generates a synthetic extent for both the ⟨⟨College⟩⟩ Element and its incoming NestList (for the latter, the IQL function generateNestLists is used⁴); this is to make sure that if any following transformations attach other constructs to ⟨⟨College⟩⟩, their extent is not lost (assuming that these constructs are not themselves inserted with extend transformations and the constants Void and Any as the lower-bound and upper-bound queries). At the end of the growing phase, the transformations applied to schema IS1 result in the intermediate schema shown in Figure 2.

⁴ Generally, function generateNestLists either accepts Element schemes ⟨⟨a⟩⟩ and ⟨⟨b⟩⟩, with extents of equal size, and generates the extent of NestList construct ⟨⟨a, b⟩⟩; or it accepts Element schemes ⟨⟨a⟩⟩ and ⟨⟨b⟩⟩, where the extent of ⟨⟨a⟩⟩ is a single instance, and generates the extent of NestList construct ⟨⟨a, b⟩⟩.

Table 3. Transformation pathway IS1 → IS2. For readability, only the part of the name of an element/attribute needed to uniquely identify it within the schema is used.
Growing phase:
addElement(⟨⟨Staff⟩⟩, [ren a 'Staff' | a ← ⟨⟨AcademicStaff⟩⟩])
addNestList(⟨⟨r, Staff, 2⟩⟩, [{r, s} | {r, u} ← ⟨⟨r, University, 1⟩⟩; {u, s} ← ⟨⟨University, School, 1⟩⟩; {s, a} ← ⟨⟨School, AcademicStaff, 1⟩⟩])
addAttribute(⟨⟨Staff, name⟩⟩, [{o, p} | {a, p} ← [{a, p} | {a, n} ← ⟨⟨AcademicStaff, name, 1⟩⟩; {n, p} ← ⟨⟨name, PCData, 1⟩⟩]; o ← [ren a 'Staff']])
addElement(⟨⟨Staff.office⟩⟩, [ren o 'Staff.office' | o ← ⟨⟨AcademicStaff.office⟩⟩])
addNestList(⟨⟨Staff, Staff.office, 1⟩⟩, [{s, o2} | {a, o1} ← ⟨⟨AcademicStaff, AcademicStaff.office⟩⟩; s ← [ren a 'Staff']; o2 ← [ren o1 'Staff.office']])
addNestList(⟨⟨Staff.office, PCData, 2⟩⟩, [{o2, p} | {o1, p} ← ⟨⟨AcademicStaff.office, PCData, 1⟩⟩; o2 ← [ren o1 'Staff.office']])
extendElement(⟨⟨College⟩⟩, Void, [c | c ← generateElemUID 'College' ⟨⟨AcademicStaff.office⟩⟩])
extendNestList(⟨⟨Staff.office, College⟩⟩, Void, [{s, c} | {s, c} ← generateNestLists ⟨⟨Staff.office⟩⟩ ⟨⟨College⟩⟩])
extendAttribute(⟨⟨College, College.name⟩⟩, Void, Any)

Shrinking phase:
deleteNestList(⟨⟨r, University⟩⟩, [{r$1&1, University$1&1}])
contractNestList(⟨⟨University, School⟩⟩, Void, Any)
contractElement(⟨⟨University⟩⟩, Void, Any)
contractNestList(⟨⟨School, AcademicStaff⟩⟩, Void, Any)
contractAttribute(⟨⟨School, name⟩⟩, Void, Any)
contractElement(⟨⟨School⟩⟩, Void, Any)
contractNestList(⟨⟨AcademicStaff, AcademicStaff.name⟩⟩, Void, [{(ren o1 'AcademicStaff'), o2} | {o1, o2, o3} ← skolemiseEdge ⟨⟨Staff, Staff.name⟩⟩ ⟨⟨AcademicStaff.name⟩⟩])
contractNestList(⟨⟨AcademicStaff.name, PCData⟩⟩, Void, [{o2, o3} | {o1, o2, o3} ← skolemiseEdge ⟨⟨Staff, Staff.name⟩⟩ ⟨⟨AcademicStaff.name⟩⟩])
contractElement(⟨⟨AcademicStaff.name⟩⟩, Void, [o2 | {o1, o2, o3} ← skolemiseEdge ⟨⟨Staff, Staff.name⟩⟩ ⟨⟨AcademicStaff.name⟩⟩])
contractNestList(⟨⟨AcademicStaff, AcademicStaff.office⟩⟩, Void, ⟨⟨Staff, Staff.office⟩⟩)
contractNestList(⟨⟨AcademicStaff.office, PCData⟩⟩, Void, ⟨⟨Staff.office, PCData⟩⟩)
contractElement(⟨⟨AcademicStaff.office⟩⟩, Void, ⟨⟨Staff.office⟩⟩)
contractElement(⟨⟨AcademicStaff⟩⟩, Void, ⟨⟨Staff⟩⟩)

The shrinking phase operates similarly. The transformations removing ⟨⟨AcademicStaff, AcademicStaff.name⟩⟩, ⟨⟨AcademicStaff.name, PCData⟩⟩ and ⟨⟨AcademicStaff.name⟩⟩ specify the inverse of the element-to-attribute transformation of the growing phase. To support attribute-to-element transformations, the IQL function skolemiseEdge is used; it takes as input a NestList ⟨⟨ep, ec⟩⟩ and an Element ⟨⟨e⟩⟩, which have the same extent size, and for each pair of instances e of ⟨⟨e⟩⟩ and {ep, ec} of ⟨⟨ep, ec⟩⟩ generates a tuple {ep, e, ec}.

The result of applying the transformations of Table 3 to schema IS1 is IS2, illustrated in Figure 1. There now exists a transformation pathway S1 → IS1 → IS2 → S2, which can be used to query S2 by obtaining data from the data source corresponding to schema S1. For example, if this is the XML document of Section 3.1, the IQL query

[{n, p} | {s, n} ← ⟨⟨staffMember, name⟩⟩; {s, o} ← ⟨⟨staffMember, office⟩⟩; {o, p} ← ⟨⟨office, PCData⟩⟩]

returns the following result:

[{'Dr. G. Grigoriadis', '123'}, {'Prof. A. Karakassis', '111'}, {'Dr. A. Papas', '321'}]

We could also use the pathway S1 → IS1 → IS2 → S2 to materialise S2 using the data from the data source corresponding to S1 — see [25] for details of this process.

The separation of the growing phase from the shrinking phase ensures the completeness of the restructuring algorithm: the growing phase considers in turn each node in the target schema T and generates if necessary a query defining this node in terms of the source schema S; conversely, the shrinking phase considers in turn each node of S and generates if necessary a query defining this node in terms of T. Inserting new target schema constructs before removing any redundant source schema constructs ensures that the constructs needed to define the extent of any construct are always present in the current schema.
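One way to picture the two IQL helper functions used in Table 3, generateNestLists (footnote 4) and skolemiseEdge, is the following reconstruction of ours; the actual IQL functions operate over construct extents inside AutoMed, which we model here as plain Python lists.

```python
# Our reconstruction of the behaviour of two IQL helpers from Table 3;
# extents are modelled as plain lists of instance identifiers.

def generate_nest_lists(parent_extent, child_extent):
    """Footnote 4: pair a parent extent with a child extent, either
    pairwise when both have the same size, or by fanning a single
    parent instance out to every child instance."""
    if len(parent_extent) == 1:
        return [(parent_extent[0], c) for c in child_extent]
    if len(parent_extent) == len(child_extent):
        return list(zip(parent_extent, child_extent))
    raise ValueError("extents must be equal-sized, or the parent a singleton")

def skolemise_edge(nestlist_extent, element_extent):
    """Given a NestList extent (parent-child pairs) and an Element extent
    of the same size, interpolate each element instance between its pair,
    yielding {ep, e, ec} triples."""
    if len(nestlist_extent) != len(element_extent):
        raise ValueError("extents must have the same size")
    return [(ep, e, ec)
            for (ep, ec), e in zip(nestlist_extent, element_extent)]
```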
4.3 Schema integration

Consider now the integration of a set of XMLDSS schemas S1, . . . , Sn, all conforming to some ontology R, into a global XMLDSS schema. The renaming algorithm of Section 4.1 can first be used to produce intermediate XMLDSS schemas IS1, . . . , ISn. The initial global schema, GS1, is IS1. IS2 is then integrated with GS1, producing GS2. The integration of ISi with GSi−1 to produce GSi proceeds until i = n. This integration consists of, first, an expansion of GSi−1 with the constructs from ISi that it is missing (again via a growing and a shrinking phase) and then a restructuring of ISi against the resulting schema GSi, using the algorithm of Section 4.2.

4.4 Interacting with XML data sources

In our framework, XML data sources are accessed using an XMLDSS wrapper. This has SAX and DOM versions for XML files, supporting a subset of XPath. There is also a wrapper over the eXist XML repository which translates IQL queries representing (possibly nested) select-project-join-union queries into (possibly nested) XQuery FLWR expressions. The XML wrapper can be used in three different settings: (i) when a source XMLDSS schema S1 has been transformed into a target XMLDSS schema S2, the resulting pathway S1 → S2 can be used to translate an IQL query expressed on S2 to an IQL query on S1, and the XML wrapper of the XML data source corresponding to S1 can be used to retrieve the necessary data for answering the query; (ii) in the integration of multiple data sources with schemas S1, . . . , Sn under a virtual global schema GS, AutoMed's Global Query Processor can process an IQL query expressed on GS in cooperation with the XML wrappers for the data sources corresponding to the Si; (iii) in a materialised data transformation or data integration setting, the XML wrapper(s) of the data source(s) retrieve the data and the XML wrapper of the target schema materialises the data into the target schema format.
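The iteration of Section 4.3 is essentially a left fold over the conformed schemas; schematically, with schemas and the expand-and-restructure step abstracted behind a merge operation (our illustrative framing, not AutoMed code):

```python
# Schematic rendering of Section 4.3: GS1 = IS1, and each further conformed
# schema ISi is folded into the growing global schema, GSi = merge(GS_{i-1}, ISi).
def integrate(conformed_schemas, merge):
    gs = conformed_schemas[0]          # GS1 = IS1
    for isi in conformed_schemas[1:]:  # produce GS2 ... GSn in turn
        gs = merge(gs, isi)
    return gs
```

With schemas modelled as sets of construct names and merge as set union, this reduces to accumulating all constructs; the real merge additionally performs the growing/shrinking restructuring of Section 4.2.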
5 Handling Multiple Ontologies

We now discuss how our approach can also handle XMLDSS schemas that are linked to different ontologies. These may be connected either directly via an AutoMed transformation pathway, or via another ontology (e.g. an ‘upper’ ontology) to which both ontologies are connected by an AutoMed pathway.

Consider in particular two XMLDSS schemas S1 and S2 that are semantically linked by two sets of correspondences C1 and C2 to two ontologies R1 and R2. Suppose that there is an articulation between R1 and R2, in the form of an AutoMed pathway between them. This may be a direct pathway R1 → R2. Alternatively, there may be two pathways R1 → RGeneric and R2 → RGeneric linking R1 and R2 to a more general ontology RGeneric, from which we can derive a pathway R1 → RGeneric → R2 (due to the reversibility of pathways). In both cases, the pathway R1 → R2 can be used to transform the correspondences C1 expressed w.r.t. R1 into a set of correspondences C1′ expressed on R2. This uses the query translation algorithm mentioned in Section 3, which performs query unfolding using the delete, contract and rename steps in R1 → R2. The result is two XMLDSS schemas S1 and S2 that are semantically linked by two sets of correspondences C1′ and C2 to the same ontology R2. Our approach described for a single ontology in Section 4 can now be applied. There is a proviso here: the new correspondences C1′ must conform syntactically to the correspondences accepted as input by the schema conformance process of Section 4.1, i.e. their syntax must be as described in the first paragraph of Section 4.1. Determining necessary conditions for this to hold, and extending our approach to handle a more expressive set of correspondences, are areas of future work.

6 Concluding Remarks

This paper has discussed the automatic transformation and integration of XML data sources, making use of known correspondences between them and one or more ontologies expressed as RDFS schemas.
The novelty of our approach lies in the use of XML-specific graph restructuring techniques in combination with correspondences from XML schemas to the same or different ontologies. The approach promotes the reuse of correspondences to ontologies and of mappings between ontologies. It is applicable to any XML data source, be it an XML document or an XML database. The data source does not need to have an accompanying DTD or XML Schema, although if one is available it is straightforward to translate such a schema into our XMLDSS schema type.

The schema conformance algorithm handles 1-1 mappings between XMLDSS and RDFS constructs, enriched with containment relationships through the use of subclass/superclass and subproperty/superproperty RDFS constraints. This semantic reconciliation of the data source schemas is followed by their structural reconciliation by the schema restructuring algorithm, which handles 1-1 mappings between XMLDSS schemas, utilising the constraints defined in the correspondences. Extending our approach to be capable of utilising 1:n, n:1 and more complex mappings is a matter of ongoing work. To this end, and at the same time aiming to maintain the current separation of semantic and structural schema reconciliation, we are currently extending the schema conformance algorithm.

References

1. B. Amann, C. Beeri, I. Fundulaki, and M. Scholl. Ontology-based integration of XML web resources. In Proc. International Semantic Web Conference 2002, pages 117–131, 2002.
2. P. Buneman, L. Libkin, D. Suciu, V. Tannen, and L. Wong. Comprehension syntax. SIGMOD Record, 23(1):87–96, 1994.
3. V. Christophides et al. The ICS-FORTH SWIM: A powerful Semantic Web integration middleware. In Proc. SWDB’03, 2003.
4. I. F. Cruz and H. Xiao. Using a layered approach for interoperability on the Semantic Web. In Proc. WISE’03, pages 221–231, 2003.
5. I. F. Cruz, H. Xiao, and F. Hsu. An ontology-based framework for XML semantic integration. In Proc. IDEAS’04, pages 217–226, 2004.
6. M. Friedman, A. Levy, and T. Millstein. Navigational plans for data integration. In Proc. 16th National Conference on Artificial Intelligence, pages 67–73. AAAI, 1999.
7. R. Goldman and J. Widom. DataGuides: enabling query formulation and optimization in semistructured databases. In Proc. VLDB’97, pages 436–445, 1997.
8. E. Jasper, A. Poulovassilis, and L. Zamboulis. Processing IQL queries and migrating data in the AutoMed toolkit. AutoMed Tech. Rep. 20, June 2003.
9. E. Jasper, N. Tong, P. McBrien, and A. Poulovassilis. View generation and optimisation in the AutoMed data integration framework. In Proc. 6th International Baltic Conference on Databases & Information Systems, Riga, Latvia, June 2004.
10. L. V. S. Lakshmanan and F. Sadri. XML interoperability. In Proc. WebDB’03, pages 19–24, June 2003.
11. P. Lehti and P. Fankhauser. XML data integration with OWL: Experiences and challenges. In Proc. Symposium on Applications and the Internet (SAINT 2004), Tokyo, 2004.
12. M. Lenzerini. Data integration: A theoretical perspective. In Proc. PODS’02, pages 233–246, 2002.
13. J. Madhavan and A. Halevy. Composing mappings among data sources. In Proc. VLDB’03, pages 572–583, 2003.
14. P. McBrien and A. Poulovassilis. Defining peer-to-peer data integration using both-as-view rules. In Proc. Workshop on Databases, Information Systems and Peer-to-Peer Computing (at VLDB’03), Berlin, 2003.
15. P. McBrien and A. Poulovassilis. Data integration by bi-directional schema transformation rules. In Proc. ICDE’03, March 2003.
16. L. Popa, Y. Velegrakis, R. Miller, M. Hernandez, and R. Fagin. Translating web data. In Proc. VLDB’02, pages 598–609, 2002.
17. E. Rahm and P. Bernstein. A survey of approaches to automatic schema matching. VLDB Journal, 10(4):334–350, 2001.
18. C. Reynaud, J. Sirot, and D. Vodislav. Semantic integration of XML heterogeneous data sources. In Proc. IDEAS, pages 199–208, 2001.
19. P. Rodriguez-Gianolli and J. Mylopoulos. A semantic approach to XML-based data integration. In Proc. ER’01, pages 117–132, 2001.
20. H. Su, H. Kuno, and E. A. Rundensteiner. Automating the transformation of XML documents. In Proc. WIDM’01, pages 68–75, 2001.
21. W3C. Guide to the W3C XML specification (“XMLspec”) DTD, version 2.1, June 1998.
22. W3C. XML Schema Specification. http://www.w3.org/XML/Schema, May 2001.
23. X. Yang, M. Lee, and T. W. Ling. Resolving structural conflicts in the integration of XML schemas: A semantic approach. In Proc. ER’03, pages 520–533, 2003.
24. L. Zamboulis. XML data integration by graph restructuring. In Proc. BNCOD’04, LNCS 3112, pages 57–71, 2004.
25. L. Zamboulis and A. Poulovassilis. Using AutoMed for XML data transformation and integration. In Proc. DIWeb’04 (at CAiSE’04), Riga, Latvia, June 2004.
26. L. Zamboulis and A. Poulovassilis. Information sharing for the Semantic Web — a schema transformation approach. AutoMed Tech. Rep. 31, February 2006.