<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">SEEKing Knowledge in Legacy Information Systems to Support Interoperability</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Joachim</forename><surname>Hammer</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Mark</forename><surname>Schmalz</surname></persName>
						</author>
						<author>
							<persName><forename type="first">William</forename><surname>O'Brien</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Sangeetha</forename><surname>Shekar</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Nikhil</forename><surname>Haldavnekar</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department">Dept. of Computer &amp; Information Science &amp; Engineering</orgName>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department" key="dep1">Rinker</orgName>
								<orgName type="department" key="dep2">School of Building Construction</orgName>
								<orgName type="institution">University of Florida Gainesville</orgName>
								<address>
									<postCode>32605</postCode>
									<region>FL</region>
									<country>U.S.A.</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="institution">University of Florida</orgName>
								<address>
									<postCode>32634-6134</postCode>
									<settlement>Gainesville</settlement>
									<region>FL</region>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">SEEKing Knowledge in Legacy Information Systems to Support Interoperability</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">B1ACAA94083547A17EC07ADE3FB06FCD</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T01:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The SEEK project (Scalable Extraction of Enterprise Knowledge) is developing methodologies to overcome the problems of assembling knowledge resident in numerous legacy information systems by enabling rapid connection to, and privacy-constrained filtering of, legacy data and applications with little programmatic setup. In this report we outline our use of data reverse engineering and code analysis techniques to automatically infer as much as possible of the schema and semantics of a legacy information system. We illustrate the approach using an example from our construction supply chain testbed.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">MOTIVATION</head><p>We are developing methodologies and algorithms to facilitate discovery and extraction of enterprise knowledge from legacy sources. These capabilities are being implemented in a toolkit called SEEK (Scalable Extraction of Enterprise Knowledge). SEEK is being developed as part of a larger, multi-disciplinary research project to develop theory and methodologies in support of computerized decision and negotiation support across a network of firms (general overview in <ref type="bibr" target="#b5">[6]</ref>). SEEK is not meant as a replacement for wrapper or mediator development toolkits. Rather, it complements existing tools by providing input about the contents and structure of the legacy source that has so far been supplied manually by domain experts. This streamlines the process and makes wrapper development scalable.</p><p>Figure <ref type="figure" target="#fig_0">1</ref> illustrates the need for knowledge extraction tools in support of wrapper development in the context of a supply chain. There are many firms (principally, subcontractors and suppliers), and each firm contains legacy data used to manage internal processes. This data is also useful as input to a project level decision support tool. However, the large number of firms working on a project makes it likely that there will be a high degree of physical and semantic heterogeneity in their legacy systems. This implies practical difficulties in connecting firms' data and systems with enterprise-level decision support tools. It is the role of the SEEK toolkit to help establish the necessary connections with minimal burden on the underlying firms, which often have limited technical expertise. The SEEK wrappers shown in Fig. <ref type="figure" target="#fig_0">1</ref> are wholly owned by the firm they are accessing and hence provide a safety layer between the source and end user. 
Security can be further enhanced by deploying the wrappers in a secure hosting infrastructure, for example at an ISP, as shown in the figure. We note that SEEK is not intended to be a general-purpose data extraction tool: it extracts a narrow range of data and knowledge from heterogeneous sources. Current instantiations of SEEK are designed to extract the limited range of information needed by project-level process models to support project optimization.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">SEEK APPROACH TO KNOWLEDGE EXTRACTION</head><p>SEEK applies Data Reverse Engineering (DRE) and Schema Matching (SM) processes to legacy database(s) to produce a source wrapper for a legacy source. The source wrapper will be used by another component (e.g., the analysis component in Figure <ref type="figure" target="#fig_0">1</ref>) wishing to communicate and exchange information with the legacy system. First, SEEK generates a detailed description of the legacy source, including entities, relationships, application-specific meanings of the entities and relationships, business rules, data formatting and reporting constraints, etc. We collectively refer to this information as enterprise knowledge. The extracted enterprise knowledge forms a knowledge base that serves as input for subsequent steps. In particular, DRE connects to the underlying DBMS to extract schema information (most data sources support some form of Call-Level Interface such as JDBC). The schema information from the database is semantically enhanced using clues extracted by the semantic analyzer from available application code, business reports, and, in the future, perhaps other electronically available information that may encode business data, such as e-mail correspondence, corporate memos, etc. It has been our experience (through visits with representatives from the construction and manufacturing domains) that such application code exists and can be made available electronically. Second, the semantically enhanced legacy source schema must be mapped into the domain model (DM) used by the application(s) that access the legacy source. This is done using a schema mapping process that produces mapping rules between the legacy source schema and the application domain model. 
In addition to the domain model, the schema mapper also needs access to the domain ontology (DO) describing the model.</p><p>Finally, the extracted legacy schema and the mapping rules provide the input to the wrapper generator (not shown), which produces the source wrapper. In this paper, we focus on our implementation of the DRE algorithm.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Data Reverse Engineering</head><p>Data reverse engineering (DRE) is defined as the application of analytical techniques to one or more legacy data sources to elicit structural information (e.g., term definitions, schema definitions) from the legacy source(s), in order to improve the database design or produce missing schema documentation. So far in SEEK, we are applying DRE to relational databases only. However, since the relational model has only limited semantic expressiveness, our DRE algorithm generates, in addition to the schema, an E/R-like representation of the entities and relationships that are not explicitly defined in the legacy schema (but which exist implicitly). Our approach to data reverse engineering for relational sources is based on existing algorithms by Chiang <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref> and Petit <ref type="bibr" target="#b7">[8]</ref>. However, we have improved their methodologies in several ways, most importantly to reduce the dependency on human input and to eliminate some of the limitations of their algorithms (e.g., the requirements of consistently named key attributes and a legacy schema in 3NF).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Our DRE algorithm is divided into schema extraction and semantic analysis, which operate in an interleaved fashion. An overview of the two algorithms, which together comprise eight steps, is shown in Figure <ref type="figure" target="#fig_1">2</ref>. In addition to the modules that execute each of the eight steps, the architecture in Figure <ref type="figure" target="#fig_3">3</ref> includes three support components. The configurable Database Interface Module (upper right-hand corner) provides connectivity to the underlying legacy source; note that this is the ONLY source-specific component in the architecture, so in order to perform knowledge extraction from a different source, only the interface module needs to be changed. The Knowledge Encoder (lower right-hand corner) represents the extracted knowledge in the form of an XML document so that it can be shared with other components in the SEEK architecture (e.g., the semantic matcher). The Metadata Repository is internal to DRE and is used to store intermediate run-time information needed by the algorithms, including user input parameters, the abstract syntax tree for the code (e.g., from a previous invocation), etc.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We now highlight each of the eight steps and related activities outlined in Figure <ref type="figure" target="#fig_3">3</ref> using an example from our construction supply chain testbed. For a detailed description of our algorithm, refer to <ref type="bibr" target="#b2">[3]</ref>. For simplicity, we assume without loss of generality that only the following relations exist in the MS-Project application, which will be discovered using DRE (for a description of the entire schema refer to <ref type="bibr" target="#b4">[5]</ref>): </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Step 1: AST Generation</head><p>We start by creating an Abstract Syntax Tree (AST) as shown in Figure <ref type="figure" target="#fig_3">3</ref>. The AST will be used by the semantic analyzer for code exploration during step 3. Our objective in AST generation is to be able to associate "meaning" with program variables. Format strings in input/output statements contain semantic information that can be associated with the variables in the input/output statement. Each such program variable may in turn be associated with a column of a table in the underlying legacy database.</p><p>Step 2. Dictionary Extraction. The goal of step 2 is to obtain the relation and attribute names from the legacy source. This is done by querying the data dictionary, which is stored in the underlying database in the form of one or more system tables. If primary key information cannot be retrieved directly from the data dictionary, the algorithm passes the set of candidate keys, along with predefined "rule-out" patterns, to the code analyzer. The code analyzer searches for these patterns in the application code and eliminates from the candidate set those attributes that occur in a rule-out pattern. The rule-out patterns, which are expressed as SQL queries, occur in the application code wherever the programmer expects to select a SET of tuples. If, after the code analysis, not all primary keys can be identified, the reduced set of candidate keys is presented to the user for final primary key selection.</p></div>
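The dictionary-extraction step can be sketched in a few lines. This is a minimal illustration rather than SEEK's implementation: Python's sqlite3 module stands in for the legacy DBMS and its Call-Level Interface, the table definitions are simplified versions of the MS-Project relations, and `extract_dictionary` is a hypothetical helper.

```python
import sqlite3

def extract_dictionary(conn):
    """Query the data dictionary for relation names, attribute names,
    and whatever primary-key information it exposes."""
    schema = {}
    tables = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    for (table,) in tables:
        cols, pk = [], []
        # PRAGMA table_info plays the role of the system tables here; a
        # JDBC source would use DatabaseMetaData.getColumns()/getPrimaryKeys().
        for _cid, name, dtype, _notnull, _dflt, is_pk in conn.execute(
                "PRAGMA table_info(%s)" % table):
            cols.append((name, dtype))
            if is_pk:
                pk.append(name)
        schema[table] = {"attributes": cols, "primary_key": pk}
    return schema

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MSP_Project (Proj_ID INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("CREATE TABLE MSP_Tasks (Proj_ID INTEGER, Task_UID INTEGER, "
             "Start TEXT, PRIMARY KEY (Proj_ID, Task_UID))")
print(extract_dictionary(conn))
```

A real source would of course expose many more relations; the point is only that step 2 is a handful of metadata queries against whatever dictionary interface the DBMS provides.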
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Result.</head><p>In the example DRE application, the following relations and their attributes were obtained from the MS-Project database:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Step 3: Code Analysis</head><p>The objective of step 3, code analysis, is twofold: (1) augment the entities extracted in step 2 with domain semantics, and (2) identify business rules and constraints that are not explicitly stored in the database but that may be important to the wrapper developer or to an application program accessing the legacy source. Our approach to code analysis is based on program slicing <ref type="bibr" target="#b3">[4]</ref> and pattern matching <ref type="bibr" target="#b6">[7]</ref>. The first sub-step is pre-slicing. From the AST of the application code, the pre-slicer identifies all nodes corresponding to input, output, and embedded SQL statements. It appends the statement node type and identifier list to an array as the AST is traversed in pre-order. For example, for the AST in Figure <ref type="figure" target="#fig_3">3</ref>, the array contains the information depicted in Table <ref type="table" target="#tab_1">1</ref>. The identifiers that occur in this data structure form the set of slicing variables. The code slicer and the analyzer, which implement the second and third sub-steps respectively, are executed once for each slicing variable identified by the pre-slicer. In the above example, the slicing variables that occur in SQL and output statements are aValue and cValue. The direction of slicing is fixed as backward or forward depending on whether the variable in question is part of an output (backward) or input (forward) statement. The slicing criterion is the exact statement (SQL, input, or output) node that corresponds to the slicing variable.</p><p>During the code-slicing sub-step we traverse the AST of the source code and retain only those nodes that have an occurrence of the slicing variable in their sub-tree. This results in a reduced AST, which is shown in Fig. <ref type="figure" target="#fig_5">4</ref>. 
During the analysis sub-step, our algorithm extracts the information shown in Table <ref type="table" target="#tab_2">2</ref> while traversing the reduced AST in pre-order: 1. If a dcln (declaration) node is encountered, the data type of the identifier can be learned. 2. embSQL nodes contain the mapping from an identifier name to the corresponding column and table names in the database. 3. Printf/scanf nodes contain the mapping from a format text string to an identifier; in other words, we can extract the 'meaning' of the identifier from the text string. The results of the analysis sub-step are appended to a result report file. After the code slicer and analyzer have been invoked on every slicing variable identified by the pre-slicer, the results report file is presented to the user, who can then decide whether to perform further analysis based on the information extracted so far. If the user decides not to perform further analysis, code analysis passes control to the inclusion dependency detection module.</p><p>It is important to note that we identify enterprise knowledge by matching templates against code fragments in the AST. So far, we have developed patterns for discovering business rules, which are encoded in loop structures and/or conditional statements, and mathematical formulae, which are encoded in loop structures and/or assignment statements. Note that the occurrence of an assignment statement by itself does not necessarily indicate the presence of a mathematical formula, but the likelihood increases significantly if the statement contains one of the slicing variables.</p></div>
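The code-slicing sub-step can be sketched as follows. `Node`, `mentions`, and `slice_ast` are hypothetical helpers over a deliberately simplified AST, not SEEK's actual data structures; the toy tree mirrors the C fragment (declaration, embedded SQL, an if statement, two printf statements).

```python
class Node:
    """Minimal stand-in for an AST node; SEEK's real AST is richer."""
    def __init__(self, kind, idents=(), children=()):
        self.kind = kind
        self.idents = set(idents)
        self.children = list(children)

def mentions(node, var):
    """True if the slicing variable occurs anywhere in the sub-tree."""
    return var in node.idents or any(mentions(c, var) for c in node.children)

def slice_ast(node, var):
    """Retain only nodes whose sub-tree mentions the slicing variable,
    producing the reduced AST consumed by the analysis sub-step."""
    if not mentions(node, var):
        return None
    kept = [s for s in (slice_ast(c, var) for c in node.children)
            if s is not None]
    return Node(node.kind, node.idents, kept)

# Toy AST for the C fragment: declaration, embedded SQL,
# an if statement, and two printf statements.
ast = Node("program", children=[
    Node("dcln", {"aValue", "cValue", "flag", "bValue"}),
    Node("embSQL", {"aValue", "cValue", "bValue"}),
    Node("if", {"cValue", "aValue"}, [Node("assign", {"flag"})]),
    Node("printf", {"aValue"}),
    Node("printf", {"cValue"}),
])

reduced = slice_ast(ast, "aValue")
print([c.kind for c in reduced.children])
```

Slicing on aValue keeps the declaration, the embedded SQL node, the if statement (whose condition mentions aValue), and the first printf, while the second printf and the assignment to flag are dropped.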
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Step 4. Discovering Inclusion Dependencies.</head><p>After extraction of the relational schema in step 2, the goal of step 4 is to identify constraints that help classify the extracted relations, which represent both the real-world entities and the relationships among them. This is done using inclusion dependencies (INDs), which indicate the existence of inter-relational constraints, including class/subclass relationships.</p><p>Let A and B be two relations, and let X and Y be attributes or sets of attributes of A and B respectively. An inclusion dependency A.X &lt;&lt; B.Y denotes that the set of values appearing in A.X is a subset of the set of values appearing in B.Y. Inclusion dependencies are discovered by examining all possible subset relationships between any two relations A and B in the legacy source.</p><p>Without additional input from the domain expert, inclusion dependencies can be identified in an exhaustive manner as follows: for each pair of relations A and B in the legacy source schema, compare the values of each non-key attribute combination X in B with the values of each candidate key attribute combination Y in A (note that X and Y may be single attributes). An inclusion dependency B.X &lt;&lt; A.Y may be present if: 1. X and Y have the same number of attributes. 2. X and Y are pairwise domain-compatible.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">B.X ⊆ A.Y</head><p>In order to check the subset criterion (3), we have designed the following generalized SQL query templates, which are instantiated for each pair of relations and attribute combinations and run against the legacy source:</p><formula xml:id="formula_0">C1 = SELECT count(*) FROM R1 WHERE U NOT IN (SELECT V FROM R2);
C2 = SELECT count(*) FROM R2 WHERE V NOT IN (SELECT U FROM R1);</formula><p>If C1 is zero, we can deduce that there may exist an inclusion dependency R1.U &lt;&lt; R2.V; likewise, if C2 is zero, there may exist an inclusion dependency R2.V &lt;&lt; R1.U. Note that it is possible for both C1 and C2 to be zero; in that case, we can conclude that the two sets of attribute values U and V are equal.</p><p>The worst-case complexity of this exhaustive search, given N tables and M attributes per table (NM total attributes), is O(N²M²). However, we reduce the search space in those cases where we can identify equi-join queries in the application code (during semantic analysis). Each equi-join query allows us to deduce the existence of one or more inclusion dependencies in the underlying schema. In addition, using the results of the corresponding count queries, we can also determine the "direction" of the dependencies. This allows us to limit the exhaustive search to only those relations not mentioned in the extracted queries.</p></div>
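The count-query templates can be instantiated mechanically. The sketch below is a simplification, not SEEK's implementation: it handles single-attribute combinations only, uses sqlite3 as a stand-in legacy source, and `candidate_inds` is a hypothetical helper; a production version would also enforce domain compatibility and restrict X to non-key and Y to candidate-key attribute combinations.

```python
import sqlite3
from itertools import permutations

def candidate_inds(conn, columns):
    """Exhaustively test candidate inclusion dependencies with the
    NOT IN count-query template; `columns` maps relation -> attributes."""
    pairs = [(rel, col) for rel, cols in columns.items() for col in cols]
    inds = []
    for (r1, u), (r2, v) in permutations(pairs, 2):
        if r1 == r2:
            continue
        (count,) = conn.execute(
            "SELECT count(*) FROM %s "
            "WHERE %s NOT IN (SELECT %s FROM %s)" % (r1, u, v, r2)).fetchone()
        if count == 0:  # every R1.U value also appears in R2.V
            inds.append("%s.%s << %s.%s" % (r1, u, r2, v))
    return inds

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MSP_Project (Proj_ID INTEGER);
    CREATE TABLE MSP_Tasks (Proj_ID INTEGER, Task_UID INTEGER);
    INSERT INTO MSP_Project VALUES (1), (2), (3);
    INSERT INTO MSP_Tasks VALUES (1, 10), (1, 11), (2, 20);
""")
print(candidate_inds(conn, {"MSP_Project": ["Proj_ID"],
                            "MSP_Tasks": ["Proj_ID"]}))
```

On this toy instance only MSP_Tasks.Proj_ID &lt;&lt; MSP_Project.Proj_ID is reported, since project 3 has no tasks; if both counts were zero, both directions would be emitted, signalling equal value sets.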
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Result: Inclusion dependencies are as follows:</head><p>The last two inclusion dependencies are removed, since they are implicitly contained in the inclusion dependencies listed in lines 2, 3, and 4 via transitivity.</p><p>Step 5. Classification of the Relations.</p><p>When reverse-engineering a relational schema, it is important to understand that, due to the limited expressiveness of the relational model, all real-world entities are represented as relations irrespective of their types and roles in the model. The goal of this step is to identify the different "types" of relations, some of which correspond to actual real-world entities while others represent relationships among them.</p><p>In this step all the relations in the database are classified into one of four types: strong, regular, weak, or specific. Identifying the different relation types is done using the primary key information obtained in step 2 and the inclusion dependencies from step 4. Intuitively, a strong entity-relation represents a real-world entity whose members can be identified exclusively through its own properties. A weak entity-relation represents an entity that has no properties of its own that can be used to identify its members; in the relational model, the primary keys of weak entity-relations usually contain primary key attributes from other (strong) entity-relations. Both regular and specific relations represent relationships between two entities in the real world (rather than the entities themselves). However, there are instances when not all of the entities participating in an (n-ary) relationship are present in the database schema (e.g., one or more of the relations were deleted as part of the normal database schema evolution process). While reverse engineering the database, we identify such relationships as specific relations.</p></div>
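One way to sketch this classification is shown below. This is our own simplification of the idea, not the paper's algorithm: `classify_relations` is a hypothetical helper, the keys are simplified relative to Table 3, and the fourth category (specific relations, for relationships with missing participants) is omitted.

```python
def classify_relations(pks, inds):
    """Classify each relation by how much of its primary key is
    'borrowed' from other relations via inclusion dependencies.
    pks:  relation -> list of primary-key attributes
    inds: (relation, attributes, referenced relation) triples over keys
    """
    kinds = {}
    for rel, pk in pks.items():
        borrowed = set()
        for r, attrs, _ref in inds:
            if r == rel:
                borrowed |= set(attrs) & set(pk)
        if not borrowed:
            kinds[rel] = "strong"    # key is entirely its own
        elif borrowed != set(pk):
            kinds[rel] = "weak"      # own discriminator plus a borrowed key
        else:
            kinds[rel] = "regular"   # key built wholly from other keys
    return kinds

# Simplified keys and key-based INDs for the running example.
pks = {
    "MSP_Project":    ["Proj_ID"],
    "MSP_Tasks":      ["Proj_ID", "Task_UID"],
    "MSP_Resources":  ["Proj_ID", "Res_UID"],
    "MSP_Assignment": ["Proj_ID", "Task_UID", "Res_UID"],
}
inds = [
    ("MSP_Tasks",      ["Proj_ID"],             "MSP_Project"),
    ("MSP_Resources",  ["Proj_ID"],             "MSP_Project"),
    ("MSP_Assignment", ["Proj_ID", "Task_UID"], "MSP_Tasks"),
    ("MSP_Assignment", ["Proj_ID", "Res_UID"],  "MSP_Resources"),
]
print(classify_relations(pks, inds))
```

MSP_Project borrows nothing and is strong; MSP_Tasks and MSP_Resources add their own discriminator to a borrowed key and are weak; MSP_Assignment's key is built entirely from other keys, so it is a (regular) relationship-relation.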
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Result:</head><p>Strong Entities: MSP_Project. Weak Entities: MSP_Resources, MSP_Tasks, MSP_Availability. Regular Relationship: MSP_Assignment.</p><p>Step 6. Classification of the Attributes.</p><p>We classify attributes as (a) PK or FK (from DRE-1 or DRE-2), (b) Dangling or General, or (c) Non-Key (the rest).</p><p>Result: Table <ref type="table" target="#tab_3">3</ref> illustrates attributes obtained from the example legacy source.</p><p>Step 7. Identify Entity Types.</p><p>Strong (weak) entity-relations obtained from step 5 are directly converted into strong (resp. weak) entities.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Using the SEEK toolkit to improve coordination in extended enterprises.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Conceptual overview of the DRE algorithm.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>MSP-Project [PROJ_ID, ...] MSP-Availability [PROJ_ID, AVAIL_UID, ...] MSP-Resources [PROJ_ID, RES_UID, ...] MSP-Tasks [PROJ_ID, TASK_UID, ...] MSP-Assignment [PROJ_ID, ASSN_UID, ...] In order to illustrate the code analysis and how it enhances the schema extraction, we refer the reader to the following C code fragment representing a simple, hypothetical interaction with the MS Project database. char *aValue, *cValue; int flag = 0; int bValue = 0; EXEC SQL SELECT A,C INTO :aValue, :cValue FROM Z WHERE B = :bValue; if (cValue &lt; aValue) { flag = 1; } printf("Task Start Date %s ", aValue); printf("Task Finish Date %s ", cValue);</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Application-specific code analysis via AST decomposition and code slicing. The direction of slicing is backward (forward) if the variable in question is in an output (resp. input or declaration) statement.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>MSP-Project [PROJ_ID, ...] MSP-Availability [PROJ_ID, AVAIL_UID, ...] MSP-Resources [PROJ_ID, RES_UID, ...] MSP-Tasks [PROJ_ID, TASK_UID, ...] MSP-Assignment [PROJ_ID, ASSN_UID, ...]</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Reduced AST.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 :</head><label>1</label><figDesc>Information maintained by the pre-slicer.</figDesc><table><row><cell>Node number</cell><cell>Statement</cell><cell>Text String (for print nodes)</cell><cell>Identifiers</cell><cell>Direction of Slicing</cell></row><row><cell>2</cell><cell>embSQL (Embedded SQL node)</cell><cell>-----</cell><cell>aValue, cValue</cell><cell>Backwards</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 :</head><label>2</label><figDesc>Information inferred during the analysis sub-step.</figDesc><table><row><cell>Identifier Name</cell><cell>Meaning</cell><cell>Possible Business Rule</cell></row><row><cell>aValue</cell><cell>Task Start Date</cell><cell>if (cValue &lt; aValue) { }</cell></row><row><cell>cValue</cell><cell>Task Finish Date</cell><cell>if (cValue &lt; aValue) { }</cell></row><row><cell>Data Type</cell><cell>Column Name in Source</cell><cell>Table Name in Source</cell></row><row><cell>char * =&gt; string</cell><cell>A</cell><cell>Z</cell></row><row><cell>char * =&gt; string</cell><cell>C</cell><cell>Z</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 .</head><label>3</label><figDesc>Example of attribute classification from MS-Project legacy source.</figDesc><table><row><cell>Relation</cell><cell>PKA</cell><cell>DKA</cell><cell>GKA</cell><cell>FKA</cell><cell>NKA</cell></row><row><cell>MS-Project</cell><cell>Proj_ID</cell><cell></cell><cell></cell><cell></cell><cell>All remaining attributes</cell></row><row><cell>MS-Resources</cell><cell>Proj_ID</cell><cell>Res_uid</cell><cell></cell><cell></cell><cell>All remaining attributes</cell></row><row><cell>MS-Tasks</cell><cell>Proj_ID</cell><cell>Task_uid</cell><cell></cell><cell></cell><cell>All remaining attributes</cell></row><row><cell>MS-Availability</cell><cell>Proj_ID</cell><cell>Avail_uid</cell><cell></cell><cell>Res_uid + Proj_ID</cell><cell>All remaining attributes</cell></row><row><cell>MS-Assignment</cell><cell>Proj_ID</cell><cell></cell><cell>Assn_uid</cell><cell>Res_uid + Proj_ID, Task_uid + Proj_ID</cell><cell>All remaining attributes</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">MSP_Assignment[Task_uid,Proj_ID] &lt;&lt; MSP_Tasks [Task_uid,Proj_ID]</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">MSP_Assignment[Res_uid,Proj_ID] &lt;&lt; MSP_Resources[Res_uid,Proj_ID]</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">MSP_Availability [Res_uid,Proj_ID] &lt;&lt; MSP_Resources [Res_uid,Proj_ID]</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3"> MSP_Resources [Proj_ID]  &lt;&lt; MSP_Project[Proj_ID]   </note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4"> MSP_Tasks [Proj_ID]  &lt;&lt; MSP_Project[Proj_ID]   </note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5"> MSP_Assignment [Proj_ID]  &lt;&lt; MSP_Project[Proj_ID]   </note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_6"> MSP_Availability [Proj_ID]  &lt;&lt; MSP_Project[Proj_ID]   </note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGEMENTS</head><p>This material is based upon work supported by the National Science Foundation under grant numbers CMS-0075407 and CMS-0122193. The authors also thank Dr. Raymond Issa for his valuable comments and feedback on a draft of this paper.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Result:</head><p>The following entities were classified: Strong entities:</p><p>MSP_Project with Proj_ID as its key.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Weak entities:</head><p>MSP_Tasks with Task_uid as key and MSP_Project as its owner.</p><p>MSP_Resources with Res_uid as key and MSP_Project as its owner.</p><p>MSP_Availability with Avail_uid as key and MSP_Resources as its owner.</p><p>Step 8. Identify Relationship Types.</p><p>The inclusion dependencies discovered in step 4 form the basis for determining the relationship types among the entities identified above. This is a two-step process; as a result, MSP_Assignment becomes an M:N relationship between MSP_Tasks and MSP_Resources.</p><p>At the end of Step 8, DRE has extracted the following schema information from the legacy database:</p><p>• Names and classification of all entities and attributes.</p><p>• Primary and foreign keys.</p><p>• Data types.</p><p>• Simple constraints (e.g., unique) and explicit assertions.</p><p>• Relationships and their cardinalities. </p></div>
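One plausible reading of the relationship-identification step can be sketched as follows: a relation classified as a relationship whose key borrows the keys of exactly two entities (via inclusion dependencies) becomes an M:N relationship between them. `identify_relationships` and the input triples are hypothetical illustrations, not SEEK's code.

```python
def identify_relationships(kinds, inds):
    """Pair each relationship-relation with the entities whose keys it
    borrows (via inclusion dependencies) to form an M:N relationship."""
    rels = []
    for rel, kind in kinds.items():
        if kind != "regular":
            continue
        # Entities referenced by this relation's key-based INDs.
        owners = sorted({ref for r, _attrs, ref in inds if r == rel})
        if len(owners) == 2:
            rels.append((rel, "M:N", owners[0], owners[1]))
    return rels

kinds = {"MSP_Project": "strong", "MSP_Tasks": "weak",
         "MSP_Resources": "weak", "MSP_Assignment": "regular"}
inds = [("MSP_Assignment", ["Proj_ID", "Task_UID"], "MSP_Tasks"),
        ("MSP_Assignment", ["Proj_ID", "Res_UID"], "MSP_Resources")]
print(identify_relationships(kinds, inds))
```

Here MSP_Assignment is reported as an M:N relationship between MSP_Resources and MSP_Tasks, matching the result of Step 8.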
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">STATUS AND FUTURE WORK</head><p>We have manually tested our approach for a number of scenarios and domains (including construction, manufacturing, and health care) to validate our knowledge extraction algorithm and to estimate how much user input is required. In addition, we have conducted experiments using nine different database applications that were created by students during course projects. The experimental results so far are encouraging: the DRE algorithm was able to reverse engineer all of the sample legacy sources encountered so far. When coupled with semantic analysis, human input is reduced compared to existing methods; instead, the user is presented with clues and guidelines that lead to the augmentation of the schema with additional semantic knowledge. The SEEK prototype is being extended using sample data from a large building construction project on the University of Florida campus, in cooperation with the manager, Centex Rooney Inc., and several subcontractors and suppliers. This data testbed will support much more rigorous testing of the SEEK toolkit. Other plans for the SEEK toolkit are: • Develop a formal representation for the extracted knowledge.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>• Develop a matching tool capable of producing mappings between two semantically related yet structurally different schemas. Currently, schema matching is performed manually, which is a tedious, error-prone, and expensive process.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>• Integrate SEEK with a wrapper development toolkit to determine whether the extracted knowledge is semantically rich enough to support compilation of legacy source wrappers for our construction testbed.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A knowledge-based system for performing reverse engineering of relational databases</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">H</forename><surname>Chiang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Decision Support Systems</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="295" to="312" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Reverse engineering of relational databases: Extraction of an EER model from a relational database</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">H L</forename><surname>Chiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Barron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">C</forename><surname>Storey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data and Knowledge Engineering</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="107" to="142" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Knowledge Extraction in the SEEK Project</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schmalz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>O'brien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shekar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Haldavnekar</surname></persName>
		</author>
		<idno>TR-0214</idno>
		<imprint>
			<date type="published" when="2002-06">June 2002</date>
			<pubPlace>Gainesville, FL 32611-6120</pubPlace>
		</imprint>
		<respStmt>
			<orgName>University of Florida</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The use of program dependence graphs in software engineering</title>
		<author>
			<persName><forename type="first">S</forename><surname>Horwitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Reps</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourteenth International Conference on Software Engineering</title>
				<meeting>the Fourteenth International Conference on Software Engineering<address><addrLine>Melbourne, Australia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1992">1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Microsoft Project 2000 Database Design Diagram</title>
		<ptr target="http://www.microsoft.com/office/project/prk/2" />
		<imprint/>
		<respStmt>
			<orgName>Microsoft Corp.</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">SEEK: Accomplishing Enterprise Information Integration Across Heterogeneous Sources</title>
		<author>
			<persName><forename type="first">W</forename><surname>O'brien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Issa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Schmalz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Geunes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">X</forename><surname>Bai</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2002">2002</date>
			<publisher>ITcon - Journal of Information Technology in Construction</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A Framework for Source Code Search Using Program Patterns</title>
		<author>
			<persName><forename type="first">S</forename><surname>Paul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Prakash</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Software Engineering</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="463" to="475" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Towards the Reverse Engineering of Denormalized Relational Databases</title>
		<author>
			<persName><forename type="first">J.-M</forename><surname>Petit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Toumani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-F</forename><surname>Boulicaut</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kouloumdjian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twelfth International Conference on Data Engineering (ICDE)</title>
				<meeting>the Twelfth International Conference on Data Engineering (ICDE)<address><addrLine>New Orleans, LA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1996">1996</date>
			<biblScope unit="page" from="218" to="227" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
