<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">EmbDI: Generating Embeddings for Relational Data Integration</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Riccardo</forename><surname>Cappuzzo</surname></persName>
							<email>riccardo.cappuzzo@eurecom.fr</email>
							<affiliation key="aff0">
								<orgName type="institution">EURECOM</orgName>
								<address>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Paolo</forename><surname>Papotti</surname></persName>
							<email>paolo.papotti@eurecom.fr</email>
							<affiliation key="aff0">
								<orgName type="institution">EURECOM</orgName>
								<address>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Saravanan</forename><surname>Thirumuruganathan</surname></persName>
							<email>sthirumuruganathan@hbku.edu.qa</email>
							<affiliation key="aff2">
								<orgName type="laboratory">QCRI</orgName>
								<address>
									<country key="QA">Qatar</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">EmbDI: Generating Embeddings for Relational Data Integration</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">483A242BE7331C75B294D2BAAC4D1A20</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T02:53+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>Data Integration, Word Embeddings</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Deep learning techniques have been used with promising results for data integration problems. Some methods use pre-trained embeddings that were trained on a large corpus such as Wikipedia. However, they may not always be an appropriate choice for enterprise datasets with custom vocabulary. Other methods adapt techniques from natural language processing to obtain embeddings for the enterprise's relational data. However, this approach blindly treats a tuple as a sentence, thus losing a large amount of contextual information present in the tuple. We propose algorithms for obtaining local embeddings that are effective for data integration tasks on relational databases. We describe a graph-based representation that allows the specification of a rich set of relationships inherent in the relational world. Then, we propose how to derive sentences from such a graph that effectively "describe" the similarity across elements (tokens, attributes, rows) in the datasets. The embeddings are learned based on such sentences. Our experiments show that our framework, EmbDI, produces promising results for data integration tasks such as entity resolution, both in supervised and unsupervised settings.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The problem of data integration concerns the combination of information from heterogeneous relational data sources, which is recognized as an expensive task for humans <ref type="bibr" target="#b0">[1]</ref>. While traditional approaches require substantial effort from domain scientists to generate features and labeled data or domain specific rules, there has been increasing interest in achieving accurate data integration with deep learning methods to reduce the human effort. Embeddings have been successfully used for this goal in data integration tasks such as entity resolution <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>, schema matching <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref>, identification of related concepts <ref type="bibr" target="#b9">[10]</ref>, and data curation in general <ref type="bibr" target="#b0">[1]</ref>. Typically, these works fall into two dominant paradigms based on how they obtain word embeddings. The first is to reuse pre-trained word embeddings computed on a generic corpus for a given task. The second is to build local word embeddings that are specific to the dataset. These methods treat each tuple as a sentence by reusing the same techniques for learning word embeddings employed in natural language processing.  However, both approaches fall short in some circumstances. Enterprise datasets contain custom vocabulary, as in the small datasets in the left-hand side of Figure <ref type="figure" target="#fig_0">1</ref>. The pre-trained embeddings do not capture the semantics expressed by these datasets and do not contain embeddings for the word "Rick". 
Approaches that treat a tuple as a sentence miss a number of signals such as attribute boundaries, integrity constraints, and so on. Moreover, existing approaches do not consider the generation of embeddings from heterogeneous datasets, with different attributes and alternative value formats. These observations motivate the generation of local embeddings for the relational datasets at hand. We advocate for the design of such local embeddings that leverage both the relational nature of the data and the downstream task of data integration.</p><p>Tuples are not sentences. Simply adapting embedding techniques originally developed for textual data ignores the richer set of semantics inherent in relational data. Consider a cell value 𝑡[𝐴 𝑖 ] of an attribute 𝐴 𝑖 in tuple 𝑡, e.g., "Mike" (in italic) in the first relation from the top. Conceptually, it has semantic connections with both the other attributes of tuple 𝑡 (such as "iPad 4th") and the other values from the domain of attribute 𝐴 𝑖 (such as "Paul", also in italic in the figure). Embedding generation must span different datasets. Embeddings must be trained using heterogeneous datasets, so that they can meaningfully leverage and surface similarity across data sources. A notion of similarity between different types of entities, such as tuples and attributes, must be developed. Tuple-tuple and attribute-attribute similarity are important features for data integration.</p><p>There are multiple challenges to overcome. First, it is not clear how to encode the semantics of the relational datasets in the embedding learning process. Second, datasets may share a limited amount of information, have different schemas, and contain a different number of tuples. Finally, datasets are often incomplete and noisy. 
The learning process is affected by low information quality, generating embeddings that do not correctly represent the semantics of the data.</p><p>We introduce EmbDI, a framework for building relational, local embeddings for data integration that brings a number of innovations to overcome the challenges above. We identify crucial components and propose effective algorithms for instantiating each of them. EmbDI is designed to be modular so that anyone can customize it by plugging in other algorithms and benefit from the continuing improvements from the deep learning and database communities. The two main contributions in our solution are the following.</p><p>1. Graph Construction. We use a compact tripartite graph-based representation of relational datasets that effectively represents syntactic and semantic data relationships. Specifically, we use three types of nodes. Token nodes correspond to the unique values found in the dataset. Record Id nodes (RIDs) represent a unique token for each tuple. Column Id nodes (CIDs) represent a unique token for each column/attribute. These nodes are connected by edges based on the structural relationships in the schema. This graph is a compact representation of the original datasets that highlights overlap and explicitly represents the primitives for data integration tasks, i.e., records and attributes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Embedding Construction.</head><p>We formulate the problem of obtaining local embeddings for relational data as a graph embedding generation problem. We use random walks to quantify the similarity between neighboring nodes and to exploit metadata such as tuple and attribute IDs. This method ensures that nodes that share similar neighborhoods will be in close proximity in the final embedding space. The corpus that is used to train our local embeddings is generated by materializing these random walks.</p><p>In this discussion paper, we report results for the entity resolution task and refer the reader to the extended version for more experiments <ref type="bibr" target="#b10">[11]</ref>.</p><p>Outline. Section 2 introduces background about embeddings. Section 3 highlights the main challenges and details the major components of the framework. Section 4 concludes the paper by reporting experiments validating our approach.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background</head><p>Embeddings. Embeddings map an entity to a high-dimensional, real-valued vector. The mapping is performed in such a way that the geometric relation between the vectors of two entities represents their co-occurrence/semantic relationship. Algorithms used to learn embeddings rely on the notion of "neighborhood": if two entities are similar, they frequently belong to the same contextually-defined neighborhood. When this occurs, the algorithm forces the vectors that represent the two entities to be close to each other in the vector space.</p><p>Word Embeddings <ref type="bibr" target="#b11">[12]</ref> are trained on a large corpus of text and produce as output a vector space where each word in the corpus is represented by a vector. The vectors for words that occur in similar contexts, such as SIGMOD and VLDB, are in proximity to each other. Popular architectures for learning embeddings include continuous bag-of-words (CBOW) and skip-gram (SG).</p><p>Node embeddings <ref type="bibr" target="#b12">[13]</ref> map graph nodes to a high-dimensional vector space so that the likelihood of preserving node neighborhoods is maximized. One way to achieve this is by performing random walks starting from each node. Node embeddings are often based on the SG model, as it maximizes the probability of observing a node's neighborhood given its embedding. By varying the type of random walks used, one obtains diverse types of embeddings.</p><p>Embeddings for Relational Datasets. Termite <ref type="bibr" target="#b9">[10]</ref> projects tokens from structured and unstructured data into a common representational space that could then be used for identifying related concepts. RetroLive <ref type="bibr" target="#b13">[14]</ref> produces embeddings that combine relational and semantic information through a retrofitting strategy. 
There has been prior work that adopts embeddings for specific tasks like entity matching <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref> and schema matching <ref type="bibr" target="#b8">[9]</ref>. Our goal is to learn relational embeddings tailored for data integration that can be used for multiple tasks.</p></div>
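The skip-gram neighborhood notion sketched above can be made concrete with a short illustration (a didactic simplification of ours, not the training code of any cited system) that enumerates the (target, context) pairs a skip-gram model is trained on:

```python
def skipgram_pairs(sentence, window=2):
    """Enumerate (target, context) pairs as consumed by skip-gram training."""
    pairs = []
    for i, target in enumerate(sentence):
        # Context tokens are those within `window` positions of the target.
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, sentence[j]))
    return pairs

pairs = skipgram_pairs(["papers", "appear", "at", "SIGMOD", "and", "VLDB"], window=2)
```

Because "SIGMOD" and "VLDB" repeatedly share context tokens (here "at" and "and"), training pushes their vectors toward each other, which is the proximity effect described in the text.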
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Challenges and Proposed Solution</head><p>Consider the scenario where one utilizes pre-trained embeddings, such as word2vec, for the tokens in two small datasets, as reported in Figure <ref type="figure" target="#fig_0">1</ref>. Pre-trained embeddings suffer from a number of issues when we use them to model the relations.</p><p>1. A number of words, such as "Rick", in the dataset are not in the pre-trained embeddings. This is especially problematic for enterprise datasets where tokens are often unique and not found in pre-trained embeddings.</p><p>2. Embeddings might contain geometric relationships that exist in the corpus they were trained on, but that are missing in the relational data. For example, the embedding for token "Steve" is closer to tokens "iPad" and "Apple" even though such a relationship is not implied in the data.</p><p>3. Relationships that do occur in the data, such as between tokens "Paul" and "Mike", are not observed in the pre-trained vector space.</p><p>Learning local embeddings from the relational data often produces better results. However, computing embeddings for non-integrated data sources is a non-trivial task. This becomes especially challenging in settings where data is scattered over different datasets with heterogeneous structures, different formats, and only partially overlapping content. Prior approaches express such datasets as sentences to be consumed by word embedding methods. However, we find that these solutions are still sub-optimal for downstream data integration tasks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Constructing Local Relational Embeddings</head><p>Our framework, EmbDI, consists of three major components, as depicted in the right-hand side of Figure <ref type="figure" target="#fig_0">1</ref>.</p><p>1. In the Graph Construction stage, we transform the relational dataset into a compact tripartite graph that encodes various relationships inherent in it. Tuple and attribute ids are treated as first-class citizens.</p><p>2. Given this graph, the next step is Sentence Construction through the use of biased random walks. These walks are carefully constructed to avoid common issues such as rare words and imbalance in vocabulary sizes. This produces as output a series of sentences.</p><p>3. In Embedding Construction, the corpus of sentences is passed to an algorithm for learning word embeddings. Depending on available external information, we optimize the graph and the workflow to improve the embeddings' quality.</p><p>Why construct a Graph? Prior approaches for local embeddings seek to directly apply an existing word embedding algorithm on the relational dataset. Intuitively, all tuples in a relation are modeled as sentences by breaking the attribute boundaries. The corpus of sentences for each tuple in the relation is then used to train the embedding. This approach produces embeddings that are customized to that dataset, but it also ignores signals that are inherent in relational data. We represent the relational data as a graph, thus enabling a more expressive representation with a number of advantages. First, it elegantly handles many of the various relationships between entities that are common in relational datasets. Second, it provides a straightforward way to incorporate external information such as "two tokens are synonyms of each other". Finally, a graph representation enables a unified view over different datasets that is invaluable for learning embeddings for data integration. 
The complete-subgraph approach makes the number of edges quadratic in the number of attributes, and it still ignores other token relationships such as "token 𝑡 1 and token 𝑡 2 belong to the same attribute".</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Relational Data as Heterogeneous Graph.</head><p>We propose a graph with three types of nodes. Token nodes correspond to the content of each cell in the relation. Multi-word tokens may be represented as a single node, split across multiple nodes, or a mix of the two strategies. Record Id nodes (RIDs) represent tuples, while Column Id nodes (CIDs) represent columns/attributes. These nodes are connected by edges according to the structural relationships in the schema.</p><p>Consider a tuple 𝑡 with RID 𝑟 𝑡 . Then, nodes for tokens corresponding to 𝑡[𝐴 1 ], . . . , 𝑡[𝐴 𝑚 ] are connected to the node 𝑟 𝑡 . Similarly, all the tokens belonging to a specific attribute 𝐴 𝑖 are connected to the corresponding CID, say 𝑐 𝑖 . This construction is generic enough to be augmented with other types of relationships. For example, if we know that two tokens are synonyms (e.g., via WordNet), this information could be incorporated by reusing the same node for both tokens. Note that a token could belong to different record ids and column ids when two different tuples/attributes share the same token. Numerical values are rounded to a number of significant figures decided by the user, and they are then assigned a node like regular categorical values; null values are not represented in the graph.</p></div>
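As an illustration of this construction, the sketch below builds the tripartite adjacency structure for a toy relation (the function name, node-id format, and data are ours for illustration, not taken from the EmbDI implementation):

```python
def build_tripartite_graph(rows, attributes):
    """Build adjacency sets over token, RID ("r_i"), and CID ("c_attr") nodes.

    Each cell token is linked to its tuple's RID node and its attribute's CID
    node; null cells get no node, and shared tokens reuse a single node.
    """
    graph = {}

    def add_edge(a, b):
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    for i, row in enumerate(rows):
        rid = f"r_{i}"
        for attr, value in zip(attributes, row):
            if value is None:          # null values are not represented
                continue
            cid = f"c_{attr}"
            add_edge(str(value), rid)  # row-level relationship
            add_edge(str(value), cid)  # attribute-level relationship
    return graph

g = build_tripartite_graph(
    [("Mike", "iPad 4th"), ("Paul", "Galaxy"), (None, "Galaxy")],
    attributes=["owner", "device"],
)
```

Note how the shared token "Galaxy" ends up connected to two RIDs and one CID, which is exactly the overlap the graph is meant to surface.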
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Graph Traversal by Random Walks.</head><p>To generate the distributed representation of every node, we produce a large number of random walks and gather them in a training corpus where each random walk corresponds to a sentence. Random walks allow a richer and more diverse set of neighborhoods than the encoding of a tuple as a single sentence. For example, a walk starting from node 'Paul' could go to node 𝐴 3 , and then to node 'Rick'. This walk implicitly defines the neighborhood based on attribute co-occurrence. Similarly, the walk from 'Paul' could go to '𝑟 5 ' and then to 'Apple', incorporating the row-level relationships. Our approach is agnostic to the specific type of random walk used. To better represent all nodes, we assign a "budget" of random walks to each of them and guarantee that all nodes will be the starting point of at least as many random walks as their budget. After choosing the starting point 𝑇 𝑖 , the random walk is generated by choosing a neighboring RID of 𝑇 𝑖 , 𝑅 𝑗 . The next step in the random walk will then be chosen at random among all neighbors of node 𝑅 𝑗 , for example by moving to 𝐶 𝑎 . Then, a new neighbor of 𝐶 𝑎 will be chosen, and the process will continue until the random walk has reached the target length. We use uniform random walks in most of our experiments to guarantee good execution times on large datasets, while providing high-quality results.</p><p>Embedding Construction. The generated sentences are then pooled together and used to train the embedding algorithm. Our approach is agnostic to the actual word embedding algorithm used. We piggyback on the plethora of effective embedding algorithms such as word2vec, GloVe, fastText, and so on. We discuss the hyperparameters for embedding algorithms, such as the learning method (either CBOW or Skip-Gram), the dimensionality of the embeddings, and the size of the context window, in the full version of the paper.</p><p>Using Embeddings for Integration. 
Once the embeddings are trained, they can be used for common data integration tasks. We describe unsupervised algorithms that employ the embeddings produced by EmbDI to perform tasks widely studied in data integration. The algorithms exploit the distance between embeddings of Column IDs and Record IDs for schema matching and entity resolution, respectively; details are reported in the full version of the paper <ref type="bibr" target="#b10">[11]</ref>.</p></div>
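The walk materialization described above can be sketched as follows (a uniform-walk simplification of ours; EmbDI's per-node budget bookkeeping and the token/RID/CID alternation are abstracted away, and all names are illustrative):

```python
import random

def generate_walks(graph, walks_per_node=10, length=60, seed=0):
    """Materialize uniform random walks; each walk becomes one training sentence."""
    rng = random.Random(seed)
    sentences = []
    for start in graph:                  # every node gets a budget of walks
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < length:
                # Uniformly pick the next node among the current node's neighbors.
                neighbors = sorted(graph[walk[-1]])
                walk.append(rng.choice(neighbors))
            sentences.append(walk)
    return sentences

# Toy tripartite neighborhood: token nodes, one RID (r_5), one CID (c_3).
toy = {
    "Paul": {"r_5", "c_3"},
    "Apple": {"r_5"},
    "Rick": {"c_3"},
    "r_5": {"Paul", "Apple"},
    "c_3": {"Paul", "Rick"},
}
walks = generate_walks(toy, walks_per_node=2, length=5)
```

The resulting corpus can then be handed to any word-embedding trainer; for instance, gensim's `Word2Vec(walks, vector_size=300, window=3, sg=1, min_count=1)` would roughly match the hyperparameters reported in the experiments.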
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head><p>We show the positive impact of our embeddings for entity resolution; more results on multiple data integration tasks are reported in the full version of the paper. Experiments have been conducted on a laptop with an Intel i7-8550U CPU (8 cores at 1.8 GHz) and 32 GB of RAM.</p><p>Datasets and Pre-trained Embeddings. We used 8 datasets from the literature <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b15">16]</ref> and a dataset with a larger schema (IM) that we created starting from open data (https://www.imdb.com/interfaces/, https://grouplens.org/datasets/movielens/). For the majority of the scenarios, less than 10% of the distinct data values overlap across the two datasets.</p><p>Pre-trained word embeddings have been obtained from fastText <ref type="bibr" target="#b16">[17]</ref>. We relied on state-of-the-art methods to combine words in tuples and to obtain embeddings for words that are not in the pre-trained vocabulary <ref type="bibr" target="#b1">[2]</ref>.</p><p>Algorithms. We test four algorithms for the generation of local embeddings. All local methods make use of our tripartite graph and exploit record and column IDs in the integration tasks. The first method is Basic, which creates embeddings from permutations of row tokens and sentences with samples of attribute tokens. The second method is Node2Vec <ref type="bibr" target="#b12">[13]</ref>, a widely used algorithm for learning node representations on graphs. Given our graph as input, it learns vectors for all nodes. The third method is Harp <ref type="bibr" target="#b17">[18]</ref>, a state-of-the-art algorithm that learns embeddings for graph nodes by preserving higher-order structural features. This method represents general meta-strategies that build on top of existing neural algorithms to improve performance. 
The fourth method is EmbDI, as presented in Section 3.1 (https://gitlab.eurecom.fr/cappuzzo/embdi), with walks (sentences) of length 60, 300 dimensions for the embedding space, the Skip-Gram model in word2vec with a window size of 3, and different tokenization strategies to convert cell values into nodes (details reported in the full paper).</p><p>We also test our local embeddings in the supervised setting with a state-of-the-art ER system (DeER 𝐿 ), comparing its results to the ones obtained with pre-trained embeddings (DeER 𝑃 ). As a baseline for the unsupervised case, we use our matching algorithm with pre-trained embeddings (fastTxt).</p><p>Metrics. We measure the quality of the results w.r.t. hand-crafted ground-truth tuple pairs with precision, recall, and their combination (F-measure).</p><p>ER Results. We study both unsupervised and supervised settings. To enable the baselines to run on these datasets, we aligned the attributes with the ground truth. EmbDI can handle the original scenario, where the schemas have not been aligned, with a limited decrease in ER quality.</p><p>Results in Table <ref type="table" target="#tab_1">1</ref> for unsupervised settings show that EmbDI-O embeddings obtain the best quality results in three scenarios and the second best in four cases. In every case, local embeddings obtained from our graph outperform pre-trained ones. For supervised settings, using local embeddings instead of pre-trained ones increases the quality of an existing system. In this case, supervised DeER shows an average 5% absolute improvement in F-measure with 5% of the ground truth passed as training data. The improvements decrease to 4% with more training data (10%). Local embeddings obtained with the Basic method lead to 0 matched rows.</p><p>Compared to Node2Vec and Harp, the execution of EmbDI is much faster and is able to compute local embeddings for all small and medium-size datasets in minutes on a commodity laptop. 
For example, it takes 2 minutes for 7.4k tuples and 19 minutes for 25k tuples versus 40 and 12 minutes with Harp, respectively. EmbDI embedding creation takes on average about 80% of the total execution time, while graph generation takes less than 1%, and sentence creation the remaining 19%.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>1</head><label>1</label><figDesc>Paul r 5 Apple A 4 Samsung r 4 Rick A 3 Paul ... r 5 Paul r 1 iPad_4th A 2 Galaxy r 3 Steve r 3 Galaxy ...</figDesc></figure>
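The quality metrics used above reduce to the standard definitions over sets of matched tuple pairs; the following sketch uses made-up pair sets purely for illustration:

```python
def precision_recall_f1(predicted, truth):
    """Precision, recall, and F-measure over sets of matched tuple pairs."""
    tp = len(predicted & truth)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1({("a", "x"), ("b", "y")}, {("a", "x"), ("c", "z")})
# One true positive out of two predictions and two true pairs: all three metrics are 0.5.
```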
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A vector space learned from text (prior methods) and from data (EmbDI).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Simple Approaches. Consider a relation 𝑅 with attributes {𝐴 1 , 𝐴 2 , . . . , 𝐴 𝑚 }. Let 𝑡 be an arbitrary tuple and 𝑡[𝐴 𝑖 ] the value of attribute 𝐴 𝑖 for tuple 𝑡. A naive approach is to create a chain graph where tokens corresponding to adjacent attributes such as 𝑡[𝐴 𝑖 ] and 𝑡[𝐴 𝑖+1 ] are connected. This will result in 𝑚 edges for each tuple. Of course, if two different tuples share the same token, then they will reuse the same node. However, relational algebra is based on set semantics, where the attributes do not have an inherent order. So, simplistically connecting adjacent attributes is doomed to fail. Another extreme is to create a complete subgraph, where an edge exists between all possible pairs of 𝑡[𝐴 𝑖 ] and 𝑡[𝐴 𝑗 ]. Clearly, this will result in 𝑚(𝑚 − 1)/2 edges per tuple.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 F</head><label>1</label><figDesc>-Measure results for Entity Resolution (ER).</figDesc><table><row><cell></cell><cell></cell><cell></cell><cell cols="2">Unsupervised</cell><cell></cell><cell></cell><cell cols="2">Supervised</cell><cell cols="2">Task specific</cell></row><row><cell></cell><cell>Pretrain</cell><cell></cell><cell></cell><cell>Local</cell><cell></cell><cell></cell><cell cols="2">(5% labelled)</cell><cell cols="2">(5% labelled)</cell></row><row><cell></cell><cell>fast</cell><cell cols="9">EmbDI EmbDI EmbDI Node Harp DeER 𝑃 DeER 𝐿 DeER 𝑃 DeER 𝐿</cell></row><row><cell></cell><cell>Txt</cell><cell>-S</cell><cell>-F</cell><cell>-O</cell><cell>2Vec</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>BB</cell><cell>0.59</cell><cell>0.50</cell><cell>0.82</cell><cell>0.86</cell><cell>0.86</cell><cell>0.86</cell><cell>0.51</cell><cell>0.53</cell><cell>0.54</cell><cell>0.58</cell></row><row><cell>WA</cell><cell>0.58</cell><cell>0.59</cell><cell>0.75</cell><cell>0.81</cell><cell>mem</cell><cell>0.78</cell><cell>0.58</cell><cell>0.62</cell><cell>0.62</cell><cell>0.63</cell></row><row><cell>AG</cell><cell>0.18</cell><cell>0.14</cell><cell>0.57</cell><cell>0.59</cell><cell>0.70</cell><cell>0.71</cell><cell>0.53</cell><cell>0.56</cell><cell>0.58</cell><cell>0.62</cell></row><row><cell>FZ</cell><cell>0.99</cell><cell>0.98</cell><cell>0.99</cell><cell>0.99</cell><cell>1.00</cell><cell>1.00</cell><cell>1.00</cell><cell>1.00</cell><cell>1.00</cell><cell>1.00</cell></row><row><cell>IA</cell><cell>0.10</cell><cell>0.09</cell><cell>0.09</cell><cell>0.11</cell><cell>mem</cell><cell>0.14</cell><cell>0.76</cell><cell>0.81</cell><cell>0.77</cell><cell>0.82</cell></row><row><cell>DA</cell><cell>0.72</cell><cell>0.95</cell><cell>0.94</cell><cell>0.95</cell><cell>0.87</cell><cell>0.97</cell><cell>0.84</cell><cell>0.89</cell><cell>0.86</cell><cell>0.90</cell></row
><row><cell>DS</cell><cell>0.80</cell><cell>0.85</cell><cell>0.75</cell><cell>0.92</cell><cell>mem</cell><cell>0.81</cell><cell>0.80</cell><cell>0.87</cell><cell>0.82</cell><cell>0.91</cell></row><row><cell>IM</cell><cell>0.31</cell><cell>0.90</cell><cell>0.64</cell><cell>0.94</cell><cell>mem</cell><cell>0.95</cell><cell>0.82</cell><cell>0.88</cell><cell>0.84</cell><cell>0.91</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgement This work has been partially supported by the ANR grant ANR-18-CE23-0019 and by the IMT Futur &amp; Ruptures program "AutoClean".</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Data curation with deep learning</title>
		<author>
			<persName><forename type="first">S</forename><surname>Thirumuruganathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ouzzani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Doan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">EDBT</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Distributed representations of tuples for entity resolution</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ebraheem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Thirumuruganathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Joty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ouzzani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PVLDB</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="1454" to="1467" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Mudgal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Rekatsinas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Doan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Krishnan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Deep</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Arcaute</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Raghavendra</surname></persName>
		</author>
		<title level="m">Deep learning for entity matching: A design space exploration</title>
				<imprint>
			<publisher>SIGMOD</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>He</surname></persName>
		</author>
		<title level="m">Auto-em: End-to-end fuzzy entity-matching using pre-trained deep models and transfer learning</title>
				<imprint>
			<publisher>WWW</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="2413" to="2424" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">CLRL: feature engineering for cross-language record linkage</title>
		<author>
			<persName><forename type="first">Ö</forename><forename type="middle">Ö</forename><surname>Çakal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mahdavi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Abedjan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">EDBT</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="678" to="681" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Kasai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Qian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gurajada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Popa</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1906.08042</idno>
		<title level="m">Low-resource deep entity resolution with transfer and active learning</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Synthesizing entity matching rules by examples</title>
		<author>
			<persName><forename type="first">R</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Meduri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Elmagarmid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Madden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Papotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Quiané-Ruiz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Solar-Lezama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proc. VLDB Endow.</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="189" to="202" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Seeping semantics: Linking datasets using word embeddings for data discovery</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Fernandez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Mansour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Qahtan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Elmagarmid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Ilyas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Madden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ouzzani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stonebraker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tang</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>ICDE</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">REMA: Graph embeddings-based relational schema matching</title>
		<author>
			<persName><forename type="first">C</forename><surname>Koutras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fragkoulis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Katsifodimos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lofi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SEA Data Workshop</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Fernandez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Madden</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1903.05008</idno>
		<title level="m">Termite: a system for tunneling through heterogeneous data</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Creating embeddings of heterogeneous relational datasets for data integration tasks</title>
		<author>
			<persName><forename type="first">R</forename><surname>Cappuzzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Papotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Thirumuruganathan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SIGMOD</title>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Word representations: a simple and general method for semi-supervised learning</title>
		<author>
			<persName><forename type="first">J</forename><surname>Turian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ratinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ACL</title>
		<imprint>
			<publisher>ACL</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="384" to="394" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">node2vec: Scalable feature learning for networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Grover</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Leskovec</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SIGKDD</title>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="855" to="864" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Retrolive: Analysis of relational retrofitted word embeddings</title>
		<author>
			<persName><forename type="first">M</forename><surname>Günther</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Thiele</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Nikulski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Lehner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">EDBT</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Gokhale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Doan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Naughton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Rampalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Shavlik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhu</surname></persName>
		</author>
		<title level="m">Corleone: hands-off crowdsourcing for entity matching</title>
				<imprint>
			<publisher>SIGMOD</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Suganthan G. C.</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Doan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Naughton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Krishnan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Deep</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Arcaute</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Raghavendra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Park</surname></persName>
		</author>
		<title level="m">Falcon: Scaling up hands-off crowdsourced entity matching to build cloud services</title>
				<imprint>
			<publisher>SIGMOD</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Enriching word vectors with subword information</title>
		<author>
			<persName><forename type="first">P</forename><surname>Bojanowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grave</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mikolov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">TACL</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="135" to="146" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">HARP: hierarchical representation learning for networks</title>
		<author>
			<persName><forename type="first">H</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Perozzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Skiena</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1706.07845</idno>
		<ptr target="http://arxiv.org/abs/1706.07845" />
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
