<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Statistical Analyses of Named Entity Disambiguation Benchmarks</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Nadine</forename><surname>Steinmetz</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Hasso Plattner Institute for Software Systems Engineering</orgName>
								<address>
									<settlement>Potsdam</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Magnus</forename><surname>Knuth</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Hasso Plattner Institute for Software Systems Engineering</orgName>
								<address>
									<settlement>Potsdam</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Harald</forename><surname>Sack</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Hasso Plattner Institute for Software Systems Engineering</orgName>
								<address>
									<settlement>Potsdam</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Statistical Analyses of Named Entity Disambiguation Benchmarks</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">C61AA4F54F3CD6974225CD7406186C2B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T04:02+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>named entity disambiguation, benchmark evaluation</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In recent years, various tools for the automatic semantic annotation of textual information have emerged. The main challenge of all approaches is to resolve the ambiguity of natural language and assign unique semantic entities according to the present context. To compare the different approaches, a ground truth, namely an annotated benchmark, is essential. However, besides the actual disambiguation approach, the achieved evaluation results also depend on the characteristics of the benchmark dataset and the expressiveness of the dictionary applied to determine entity candidates. This paper presents statistical analyses and mapping experiments on different benchmarks and dictionaries to identify characteristics and structure of the respective datasets.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>One essential step in understanding textual information is the identification of semantic concepts within natural language texts. Therefore, multiple Named Entity Recognition systems have been developed and integrated into content management and information retrieval systems to handle the flood of information.</p><p>We have to distinguish between Named Entity Recognition (NER) systems, which find meaningful entities of a specific predetermined type (e. g., persons, locations, or organizations) within a given natural language text, and Named Entity Disambiguation (NED) systems (sometimes also referred to as Named Entity Mapping or Named Entity Linking), which take the NER process one step further by interpreting named entities to assign a unique meaning (entity) to a sequence of terms. In order to achieve this, first all potential entity candidates for a phrase have to be determined with the help of a dictionary. The number of potential entity candidates corresponds to the level of ambiguity of the underlying text phrase. Taking into account the context of the phrase, e. g. the sentence where the phrase occurs, a unique entity is selected according to the meaning of the phrase in a subsequent disambiguation step.</p><p>Multiple efforts compete in this discipline. However, the comparison of different NED systems is difficult, especially if they do not use a common dictionary for entity candidate determination. Therefore, it is highly desirable to provide common benchmarks for evaluation. On the other hand, benchmarks are applied to tune a NED system for its intended purpose and/or a specific domain, i. e. context and pragmatics of the NED system are fixed to a specific task. To this end, multiple benchmark datasets have been created to evaluate such systems. 
To evaluate a NED system and to compare its performance against already existing solutions, the system's developer should be aware of the characteristics of the available benchmarks.</p><p>In this paper, prominent datasets -dictionary datasets as well as benchmark datasets -are analyzed to gain better insights into both their characteristics and their capabilities, while also considering potential drawbacks. The datasets are statistically analyzed for mapping coverage, level of ambiguity, maximum achievable recall, as well as difficulty. All benchmarks and evaluation results are available online to enable more target-oriented evaluations of NER and NED systems.</p><p>The paper is organized as follows: Section 2 gives an overview of NED tools and comparison approaches and introduces the benchmarks and dictionaries utilized in this paper. Statistical information about the benchmarks is presented in Section 3. Experiments using four different dictionaries on three different benchmarks are described and discussed in Section 4. Section 5 concludes the paper and summarizes the scientific contribution.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>Semantic annotation of textual information in web documents has become a key technology for data mining and information retrieval, and a key step towards the Semantic Web. Several tools for automatic semantic annotation have emerged for this task and created a strong demand for evaluation benchmarks to enable comparison. Therefore, a number of benchmarks containing natural language texts annotated with semantic entities have been created. Cornolti et al. present a benchmarking framework for entity-annotation tools and also compare the performances of various systems <ref type="bibr" target="#b2">[3]</ref>. This evaluation indicates differences between the applied datasets, but does not analyze their causes in further detail. Gangemi describes an approach for comparing different annotation tools without the application of a benchmark <ref type="bibr" target="#b4">[5]</ref>. The baseline for the evaluation is defined by the maximum agreement of all evaluated automatic semantic annotation tools. Unfortunately, such a baseline does not take into account different semantic annotation levels in terms of the special purposes the evaluated tools have been developed for.</p><p>DBpedia Spotlight is an established NED application that applies an analytical approach for the disambiguation process. Every entity candidate of a surface form found in the text is represented by a vector composed of all terms that co-occurred within the same paragraphs of the Wikipedia articles where this entity is linked <ref type="bibr" target="#b8">[9]</ref>. The approach has been evaluated on a benchmark containing ten semantically annotated New York Times articles. This benchmark is described in Section 3.1 and is part of the presented experiments. DBpedia Spotlight applies a Wikipedia-based dictionary -a Lexicalization dataset -to determine potential entity candidates. 
This dataset is also part of the presented experiments and is described in the next section.</p><p>AIDA is an online tool for the disambiguation of named entities in natural language text and tables <ref type="bibr" target="#b11">[12]</ref>. It utilizes relationships between named entities for the disambiguation. AIDA applies a dictionary called AIDA Means to determine potential entity candidates. This dictionary is further described in the next section and is also examined in the experiments described in Section 4. AIDA has been evaluated on a benchmark created from the CoNLL 2003 dataset 1 . Since this dataset is not available for free, KORE 50 -a subset of the AIDA benchmark dataset, described in Section 3.1 -has been used for the experiments in this paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Benchmark Dataset Evaluation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Benchmark Datasets</head><p>The benchmark datasets under consideration contain annotated texts linking enclosed lexemes to entities. Based on these benchmarks, the performance of NED systems can be evaluated. Within this work, we restrict our selection of benchmark datasets to those (a) containing English language texts, (b) originating from authentic documents (e. g. newswire), (c) containing annotations to DBpedia entities or Wikipedia articles, and (d) involving context at least on sentence level.</p><p>The DBpedia Spotlight dataset <ref type="bibr" target="#b8">[9]</ref> has been created for the eponymous NED tool. It contains 60 natural language sentences from ten different New York Times articles with overall 249 annotated DBpedia entities, i. e. the entities are not explicitly bound to mentions within the texts, which causes a certain lack of clarity. Therefore, we have retroactively allocated the entities to their positions within the texts to the best of our knowledge. The entities dbp:Markup_Language and dbp:PBC_CSKA_Moscow could not be linked in the texts, since a more specific entity was listed occupying their only possible location, e. g. hypertext markup language has been annotated with dbp:HTML rather than dbp:Markup_language.</p><p>KORE 50 (AIDA) <ref type="bibr" target="#b6">[7]</ref> is a subset of the larger AIDA corpus <ref type="bibr" target="#b7">[8]</ref>, which is based on the dataset of the CoNLL 2003 NER task. The dataset aims to capture hard-to-disambiguate mentions of entities, and it contains a large number of first names referring to persons whose identity needs to be deduced from the given context. It comprises 50 sentences from different domains, such as music, celebrities, and business, and is provided in a clear TSV format.</p><p>The Wikilinks Corpus <ref type="bibr" target="#b9">[10]</ref> has been introduced recently by Google. 
The corpus collects hyperlinks to Wikipedia gathered from over 3 million web sites. It has been transformed to RDF using the NLP Interchange Format (NIF) by Hellmann et al. <ref type="bibr" target="#b5">[6]</ref>. The corpus is divided into 68 RDF dump files, of which the first one<ref type="foot" target="#foot_1">2</ref> has been used for Lexicalization Statistics (cf. Section 4). The intention behind links to Wikipedia articles needs to be considered in a different way compared to the intention of the other two datasets, since these links have been created rather for informational reasons. For each annotation the original website is named, which allows recovering the full document contexts for the annotations, though they are not contained in the NIF resource so far. This benchmark cannot be considered a gold standard. In some cases, mentions are linked to broken URLs, redirects, or semantically wrong entities. This issue is also discussed in Section 4.</p><p>For further processing, NIF representations of KORE 50 and DBpedia Spotlight have been created, which are accessible at our website<ref type="foot" target="#foot_2">3</ref> . Further datasets not considered in this paper are e. g. the complete AIDA/CoNLL corpus <ref type="bibr" target="#b7">[8]</ref>, the WePS (Web people search) evaluation dataset <ref type="bibr" target="#b0">[1]</ref>, the cross-document Italian people coreference (CRIPCO) corpus <ref type="bibr" target="#b1">[2]</ref>, and the corpus for cross-document coreference by Day et al. <ref type="bibr" target="#b3">[4]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Benchmark Statistics</head><p>The three benchmark datasets under consideration cover different domains, e. g. though all datasets originate from authentic corpora, varying portions have been selected and different types of entities have been annotated. Table <ref type="table" target="#tab_0">1</ref> shows the distribution of DBpedia types within the benchmark datasets. About 10% of the annotated entities in the DBpedia Spotlight dataset are locations, and a majority of about 80% of the annotated entities are not associated with any type information in DBpedia. Since the DBpedia Spotlight dataset originates from New York Times articles, the annotations are embedded in document contexts. The KORE 50 dataset contains 144 annotations, which mostly refer to agents (74 times dbo:Person and 28 times dbo:Organisation). Only a relatively small share (18.5%) of the annotated entities does not provide any type information in DBpedia. The context for the annotated entities in the KORE 50 dataset is limited to (relatively short) sentences.</p><p>By far the largest dataset is Wikilinks. Its sheer size allows extracting sub-benchmarks for specific designated domains, e. g. there are about 281,000 mentions of 8,594 different diseases. However, a large share (66%) of the annotated entities does not provide any type information in DBpedia, and the largest share of the typed entities refers to agents (18.9%).</p></div>
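The type-distribution statistics summarized in Table 1 can be sketched with a few lines of code; the entity-to-type map below is invented toy data, not taken from the actual benchmarks:

```python
from collections import Counter

# Toy mapping from annotated entities to their DBpedia types; an empty
# list marks an entity that carries no type information in DBpedia.
entity_types = {
    "dbp:Berlin": ["Place", "PopulatedPlace", "Settlement"],
    "dbp:Michael_Jordan": ["Agent", "Person", "Athlete"],
    "dbp:HTML": [],  # untyped in DBpedia
}

# Count how often each type occurs across all annotated entities.
type_counts = Counter(t for types in entity_types.values() for t in types)

# Share of entities without any type information (the "untyped" row).
untyped_share = sum(1 for t in entity_types.values() if not t) / len(entity_types)
```

On real data the same counting would be run over all annotated entities of a benchmark, yielding the percentages reported in Table 1.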
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Lexicalization Statistics and Discussion</head><p>The benchmarks described in Section 3.1 are constructed to evaluate NED algorithms. The evaluation results of a NED method are not only dependent on the actual algorithm used to disambiguate ambiguous mentions but also on the structure of the benchmark and the underlying dictionary utilized to determine entity candidates for a mention. A mention mapping or mapped mention refers to a mention of a benchmark that is assigned to one or more entity candidates of the used dictionary. The following section introduces several dictionaries.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Dictionary Datasets</head><p>Dictionaries contain associations that map strings (surface forms) to entities represented by Wikipedia articles or DBpedia concepts. Typically, dictionaries are applied by NED systems in an early step to find candidates for lexemes in natural language texts. In a further (disambiguation) step, the actual correct entity has to be selected from all these candidates.</p><p>The DBpedia Lexicalizations dataset <ref type="bibr" target="#b8">[9]</ref> has been extracted from Wikipedia interwiki links. It contains anchor texts, the so-called surface forms, with their respective destination articles. Overall, there are 2 million entries in the DBpedia Lexicalizations dataset. For each combination, the conditional probabilities P(uri|surfaceform)<ref type="foot" target="#foot_3">4</ref>, P(surfaceform|uri), and the pointwise mutual information value (PMI) are given. Subsequently, this dictionary is referred to as DBL (DBpedia Lexicalizations).</p><p>Google has released a similar, but far larger dataset: Crosswiki <ref type="bibr" target="#b10">[11]</ref>. The Crosswiki dictionary has been built at web scale and includes 378 million entries. This dictionary is subsequently referred to as GCW. Similar to the DBL dataset, the probability P(uri|surfaceform) has been calculated and is available in the dictionary. This probability is used for the experiments described in Section 4.2.</p><p>The AIDA Means dictionary is an extended version of the YAGO2<ref type="foot" target="#foot_4">5</ref> means relation. The YAGO means relation is harvested from disambiguation pages, redirects, and links in Wikipedia <ref type="bibr" target="#b11">[12]</ref>. Unfortunately, no information is given on what exactly the extension includes. The AIDA Means dictionary contains ∼18 million entries. 
Subsequently, this dictionary is referred to as AIDA.</p><p>In addition to the three already existing dictionaries described above, we have constructed our own dictionary. Similar to the YAGO means relation, this dictionary has been constructed by resolving disambiguation pages and redirects and using these alternative labels in addition to the original labels of the DBpedia entities. Except for the elimination of bracket terms (e. g. the label Berlin (2009 film) is converted to Berlin by removing the brackets and the term within them), no further preprocessing has been performed on this dictionary. Thus, all labels retain their original capitalization. Further evaluation on this issue is described in Section 4.3. This dictionary is subsequently referred to as RDM (Redirect Disambiguation Mapping).</p></div>
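The dictionary statistics P(uri|surfaceform), P(surfaceform|uri), and PMI can be computed from anchor-link co-occurrence counts; the following minimal sketch uses invented toy anchor-link pairs, not the actual DBL or GCW data:

```python
import math
from collections import Counter

# Toy (surface form, linked entity) pairs as they might be harvested from
# Wikipedia anchor texts; all names and counts are illustrative only.
pairs = [
    ("Berlin", "dbp:Berlin"), ("Berlin", "dbp:Berlin"),
    ("Berlin", "dbp:Berlin_(2009_film)"),
    ("the German capital", "dbp:Berlin"),
]

pair_counts = Counter(pairs)
sf_counts = Counter(sf for sf, _ in pairs)
uri_counts = Counter(uri for _, uri in pairs)
total = len(pairs)

def p_uri_given_sf(uri, sf):
    # Conditional probability of an entity given the surface form.
    return pair_counts[(sf, uri)] / sf_counts[sf]

def p_sf_given_uri(sf, uri):
    # Conditional probability of a surface form given the entity.
    return pair_counts[(sf, uri)] / uri_counts[uri]

def pmi(sf, uri):
    # Pointwise mutual information between surface form and entity.
    p_joint = pair_counts[(sf, uri)] / total
    p_sf = sf_counts[sf] / total
    p_uri = uri_counts[uri] / total
    return math.log2(p_joint / (p_sf * p_uri))
```

For instance, with these toy counts "Berlin" links to dbp:Berlin in two of its three occurrences, so p_uri_given_sf returns 2/3 for that pair.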
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Experiments</head><p>To identify several characteristics of the introduced dictionaries, as well as to consolidate assumptions about the structure of the benchmarks, the experiments described in the following sections have been conducted. For performance reasons, only a subset of the Wikilinks benchmark has been used for the following experiments. For this subset, the first dump file, containing 494,512 annotations and 192,008 distinct mentions and assigned entities, has been used.</p><p>Mapping Coverage First, the coverage of mention mappings is calculated. All annotated entity mentions from the benchmarks are looked up in the four different dictionaries. If at least one entity candidate for the mention is found in the dictionary, a counter is increased. This measure is an indicator for the expressiveness and versatility of the dictionary.</p><p>Entity Candidate Count For all mapped mentions, the number of entity candidates found in the respective dictionary is added up. The number of entity candidates corresponds to the level of ambiguity of the mention and can be considered as an indicator for the level of difficulty of the subsequent disambiguation process.</p></div>
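The two measures above can be sketched as follows, assuming a dictionary that maps surface forms to sets of candidate entities; the dictionary and mentions below are toy data, not the actual benchmarks or dictionaries:

```python
# Toy dictionary mapping surface forms to candidate entity sets.
dictionary = {
    "Paris": {"dbp:Paris", "dbp:Paris_Hilton", "dbp:Paris,_Texas"},
    "Michael": {"dbp:Michael_Jackson", "dbp:Michael_Jordan"},
}

# Toy benchmark mentions; "internet" has no entry in the dictionary.
mentions = ["Paris", "Michael", "internet"]

# Mapping coverage: share of mentions with at least one candidate.
mapped = [m for m in mentions if dictionary.get(m)]
coverage = len(mapped) / len(mentions)

# Entity candidate count: average number of candidates per mapped mention,
# an indicator of the ambiguity the disambiguation step has to resolve.
avg_candidates = sum(len(dictionary[m]) for m in mapped) / len(mapped)
```

Here two of three mentions are mapped (coverage 2/3), with 3 and 2 candidates respectively (average 2.5).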
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Maximum Recall</head><p>For all mapped mentions, the list of entity candidates is checked for whether the annotated entity (from the benchmark) is included. Only if it is contained in the list is a correct disambiguation achievable at all. Thus, this measure predicts the maximum achievable recall using the respective dictionary on the benchmark.</p></div>
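The maximum-recall measure reduces to a membership test over the candidate lists; a minimal sketch on toy data (invented dictionary entries and gold annotations):

```python
# Toy dictionary of surface forms to candidate entity sets.
dictionary = {
    "Paris": {"dbp:Paris", "dbp:Paris_Hilton"},
    "Berlin": {"dbp:Berlin"},
}

# Toy gold annotations: (mention, annotated entity) pairs.
gold = [
    ("Paris", "dbp:Paris"),
    ("Berlin", "dbp:Berlin"),
    ("Paris", "dbp:Paris,_Texas"),  # gold entity missing from candidates
]

# A correct disambiguation is only achievable if the gold entity
# appears among the candidates at all.
reachable = sum(1 for mention, entity in gold
                if entity in dictionary.get(mention, set()))
max_recall = reachable / len(gold)
```

The third annotation can never be disambiguated correctly with this dictionary, so the maximum achievable recall is 2/3 regardless of the disambiguation algorithm.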
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Recall and Precision achieved by Popularity</head><p>In Word Sense Disambiguation (WSD), after determining entity candidates for the mentions, a subsequent disambiguation process tries to detect the most relevant entity among all candidates according to the given context. For this experiment, the disambiguation process is simplified: the most popular entity among the available candidates is chosen as the correct disambiguation. To determine the popularity of the entity candidates, three different measures are applied:</p><p>-Incoming Page Links of entity candidates -Anchor-Link-Probability within web document corpus -Anchor-Link-Probability within Wikipedia corpus</p><p>The first measure is a simple entity-based popularity measure. The popularity is defined according to the number of incoming Wikipedia page links. The more links point to an entity, the more popular the entity is considered. The Anchor-Link-Probability defines the probability of a linked entity for a given anchor text. Thus, the more often a mention is used to link to the same entity, the higher the Anchor-Link-Probability. This probability has been calculated on two different corpora. For the DBL dictionary, this probability has been calculated based on the Wikipedia article corpus, and for the GCW dataset it has been calculated based on all web documents (cf. Section 4.1). The results of this experiment can be considered as an indicator for the degree of difficulty of the applied benchmark in terms of WSD. High recall and precision achieved by simply using a popularity measure indicate a less difficult benchmark dataset. If a benchmark contains less popular entities, the disambiguation process can be considered more difficult.</p></div>
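The simplified popularity baseline can be sketched as follows; the candidate sets, link counts, and gold annotations are invented toy data, with incoming page links standing in for the first popularity measure:

```python
# Toy candidates per mention, each with an invented incoming-page-link count.
candidates = {
    "Paris": {"dbp:Paris": 9000, "dbp:Paris_Hilton": 1200},
    "Michael": {"dbp:Michael_Jackson": 5000, "dbp:Michael_Jordan": 4000},
}
# Toy gold annotations for the two mentions.
gold = {"Paris": "dbp:Paris", "Michael": "dbp:Michael_Jordan"}

def disambiguate(mention):
    # Pick the most popular candidate; None if the mention is unmapped.
    cands = candidates.get(mention)
    return max(cands, key=cands.get) if cands else None

predictions = {m: disambiguate(m) for m in gold}
correct = sum(1 for m, e in gold.items() if predictions[m] == e)
recall = correct / len(gold)                                   # over all gold annotations
precision = correct / sum(1 for p in predictions.values() if p)  # over made predictions
```

In this toy setup the popular choice is right for "Paris" but wrong for "Michael", illustrating why benchmarks built around less popular entities (such as KORE 50's first names) are harder for a pure popularity baseline.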
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Results &amp; Discussion</head><p>The experiments described above are discussed in the following paragraphs. For every experiment, a table with the achieved results is given. The tables show the results for the four different dictionaries -represented by the columns -on the three different benchmarks -represented by the rows. For comparison purposes, for all dictionaries the number of entries and for all benchmarks the number of distinct mentions and their annotated entities are given. For all results, the total numbers as well as a proportional or averaged value are given. This facilitates the comparison of benchmarks and dictionaries that differ significantly in number of annotations and size.</p><p>The experiments mapping coverage, entity candidate count, maximum recall, and recall and precision based on page link popularity have also been performed using case-insensitive mentions and labels in the four different dictionaries. For comparison, these results are presented in the same tables as the results of the case-sensitive experiments. Recall and precision based on Anchor-Link-Probability have not been calculated, as the probabilities for case-insensitive anchors are not available for the DBL and GCW datasets.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Mapping Coverage</head><p>-GCW achieves the highest coverage (between 94.67% and 100%) due to being the largest dictionary, containing 378 million entries, and due to its construction method: anchor texts and linked Wikipedia articles in web documents. -RDM performs worst, with only 25.19% on the Spotlight benchmark, due to the lack of preprocessing -all labels are given with capital first letters, which is not common in English except for persons, places, and organizations. -Coverage for RDM increases by 69% (to 94%) when mentions in the Spotlight benchmark are looked up case-insensitively in the dictionary. Also, for the Wikilinks benchmark the coverage using the RDM dictionary increases by 16% to 76%. The RDM dictionary consists of mainly case-sensitive labels (as no preprocessing has been performed). Persons, organizations, and places are written with a capital first letter in English texts. Mentions of entities of those types are found in a case-sensitive dictionary, such as RDM. In contrast, mentions of entities that are not of type person, organization, or place, e. g. internet, are not found in the dictionary. If a benchmark contains mainly mentions of entities of type person, organization, or place, the RDM dictionary achieves a high mapping coverage -as for the KORE 50 benchmark. Case-insensitive selection increases the coverage, especially if the benchmark contains entity mentions that are not of type person, organization, or place. This assumption is consolidated by the increased mapping coverage for the Spotlight and Wikilinks benchmarks and the type information of the mentioned entities in the benchmarks presented in Table <ref type="table" target="#tab_0">1</ref>. 
-Overall, the dictionaries perform very well or even best on the benchmarks that have been constructed for the evaluation of their respective applications: DBL -Spotlight, AIDA -KORE 50, and GCW -Wikilinks.</p><p>The overall results are depicted in Table <ref type="table" target="#tab_1">2</ref>. </p></div>
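The mechanism behind the case-insensitive coverage gains can be sketched by lowercasing the dictionary keys once; the labels below are toy data, and the note on RDM capitalization reflects the description in Section 4.1:

```python
# Toy case-sensitive dictionary; the real RDM labels keep their original
# capitalization, so e.g. "internet" would miss the label "Internet".
dictionary = {"Internet": {"dbp:Internet"}, "Berlin": {"dbp:Berlin"}}

# Build a case-insensitive index by lowercasing every label once and
# merging candidate sets of labels that collide after lowercasing.
ci_dictionary = {}
for label, entities in dictionary.items():
    ci_dictionary.setdefault(label.lower(), set()).update(entities)

def lookup(mention, case_sensitive=True):
    if case_sensitive:
        return dictionary.get(mention, set())
    return ci_dictionary.get(mention.lower(), set())
```

A case-sensitive lookup of "internet" returns no candidates, while the case-insensitive variant finds the entity, mirroring the coverage increase observed for RDM on the Spotlight and Wikilinks benchmarks.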
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Entity Candidate Count</head><p>-The KORE 50 benchmark is intended to contain mentions that are hard to disambiguate -overall, all dictionaries achieve the highest entity candidate count for this benchmark. -For the Wikilinks benchmark, all dictionaries achieve a low entity candidate count, which suggests that real-world annotations are not too hard to disambiguate. -The AIDA dictionary assigns the most entity candidates on the KORE 50 benchmark, as the dictionary is constructed for evaluation on that benchmark and is supposedly enlarged by labels especially for that purpose. -KORE 50 contains many persons that are mentioned by their first name only. This results in a large number of entity candidates. -The Wikilinks benchmark is annotated very sparsely, and only presumably 'important' entities are linked.</p><p>Overall results are shown in Table <ref type="table" target="#tab_2">3</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Maximum Recall</head><p>-DBL and RDM do not contain all first names of persons as needed for the KORE 50 benchmark. Thus, the maximum recall decreases compared to the mapping coverage. -AIDA performs poorly on the Spotlight benchmark due to the structure of the dictionary.</p><p>The dictionary contains a large number of persons' first names. Apparently, the dictionary does not reflect labels for entities in manually annotated texts. -For the RDM dictionary, the maximum recall increases by 10% and 63%, respectively, for the two benchmarks Wikilinks and Spotlight if mentions are looked up case-insensitively. This is a reflection of the structure of the benchmarks and the increased coverage of mapped mentions. -For the Wikilinks benchmark, the maximum achievable recall is low compared to the other two benchmarks. This results from the fact that this benchmark cannot be considered a gold standard (cf. Section 3.1). If a mention is annotated with a wrong entity, there is a high probability that this entity is not contained in the lists of entity candidates.</p><p>Overall results are shown in Table <ref type="table" target="#tab_3">4</ref>.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head>Recall and Precision achieved by Popularity</head><p>Incoming Wikipedia Page Links of Entity Candidates: -Notably, GCW performs poorly on all benchmarks compared to the maximum achievable recall, due to a high entity candidate count. Apparently, entity candidate lists often contain more popular but incorrect entities. -In the KORE 50 benchmark, due to many annotated first names, entity candidate lists contain many prospective entities, and apparently the correct candidate is often not the most popular one compared to the other candidates. This explains the poor performance of all dictionaries on KORE 50 using page link popularity. -Compared to the maximum achievable recall (of all dictionaries) on KORE 50, the achieved recall is very low using a popularity measure as a simplified disambiguation process. 
This confirms the intention of the benchmark to contain mentions that are hard to disambiguate.</p><p>Overall results are shown in Table <ref type="table">5</ref>.</p><p>Table <ref type="table">5</ref>. Recall and Precision, if the most popular entity -based on incoming Wikipedia page links -is mapped to the mention.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>The objective of this paper is to point out the differences between several benchmarks and dictionaries for NED. For this purpose, three different benchmarks have been analyzed. Two of them first have been converted into NIF representations and made available online. The analyses included simple statistical information about the benchmarks as well as type information of the contained entities. Additionally, four different dictionaries have been applied to determine entity candidates in the benchmarks. Based on our evaluation, important assumptions about the benchmarks have been consolidated, and new insights into the characteristics of the evaluated benchmarks as well as into the expressiveness of the dictionaries have been delivered. 
By making all benchmarks and evaluation results available online, new NER and NED tools can be evaluated in a more targeted way and with more meaningful results.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Distribution of DBpedia types in Benchmark Datasets</figDesc><table>
<row><cell>Class</cell><cell cols="2">Spotlight</cell><cell cols="2">KORE 50</cell><cell cols="2">Wikilinks</cell></row>
<row><cell></cell><cell>entities</cell><cell>mentions</cell><cell>entities</cell><cell>mentions</cell><cell>entities</cell><cell>mentions</cell></row>
<row><cell>total</cell><cell>249</cell><cell>331</cell><cell>130</cell><cell>144</cell><cell>2,228,049</cell><cell>30,791,380</cell></row>
<row><cell>untyped</cell><cell>79.9%</cell><cell>80.1%</cell><cell>18.5%</cell><cell>17.4%</cell><cell>66.5%</cell><cell>60.7%</cell></row>
<row><cell>Activity</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>-</cell><cell>-</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
<row><cell>-Sport</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>-</cell><cell>-</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
<row><cell>Agent</cell><cell>2.4%</cell><cell>2.7%</cell><cell>66.9%</cell><cell>70.8%</cell><cell>18.9%</cell><cell>18.7%</cell></row>
<row><cell>-Organisation</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>18.5%</cell><cell>19.4%</cell><cell>5.3%</cell><cell>5.8%</cell></row>
<row><cell>--Company</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>9.2%</cell><cell>9.7%</cell><cell>1.8%</cell><cell>1.8%</cell></row>
<row><cell>--SportsTeam</cell><cell>-</cell><cell>-</cell><cell>7.7%</cell><cell>6.9%</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
<row><cell>---SoccerClub</cell><cell>-</cell><cell>-</cell><cell>7.7%</cell><cell>6.9%</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
<row><cell>-Person</cell><cell>2.0%</cell><cell>2.4%</cell><cell>48.5%</cell><cell>51.4%</cell><cell>13.6%</cell><cell>12.9%</cell></row>
<row><cell>--Artist</cell><cell>-</cell><cell>-</cell><cell>17.7%</cell><cell>18.8%</cell><cell>3.4%</cell><cell>3.5%</cell></row>
<row><cell>---MusicalArtist</cell><cell>-</cell><cell>-</cell><cell>17.7%</cell><cell>18.8%</cell><cell>1.8%</cell><cell>1.7%</cell></row>
<row><cell>--Athlete</cell><cell>-</cell><cell>-</cell><cell>6.9%</cell><cell>8.3%</cell><cell>1.2%</cell><cell>&lt;1%</cell></row>
<row><cell>---SoccerPlayer</cell><cell>-</cell><cell>-</cell><cell>5.4%</cell><cell>6.3%</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
<row><cell>--Officeholder</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>4.6%</cell><cell>4.2%</cell><cell>1.1%</cell><cell>1.2%</cell></row>
<row><cell>Colour</cell><cell>1.6%</cell><cell>1.5%</cell><cell>-</cell><cell>-</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
<row><cell>Disease</cell><cell>1.6%</cell><cell>1.2%</cell><cell>-</cell><cell>-</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
<row><cell>EthnicGroup</cell><cell>1.2%</cell><cell>1.8%</cell><cell>-</cell><cell>-</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
<row><cell>Event</cell><cell>1.2%</cell><cell>&lt;1%</cell><cell>-</cell><cell>-</cell><cell>1.0%</cell><cell>1.5%</cell></row>
<row><cell>Place</cell><cell>10.4%</cell><cell>10.0%</cell><cell>10.8%</cell><cell>10.4%</cell><cell>9.6%</cell><cell>12.2%</cell></row>
<row><cell>-ArchitecturalStructure</cell><cell>2.0%</cell><cell>1.5%</cell><cell>3.1%</cell><cell>2.8%</cell><cell>1.8%</cell><cell>1.6%</cell></row>
<row><cell>--Infrastructure</cell><cell>1.6%</cell><cell>1.2%</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
<row><cell>-PopulatedPlace</cell><cell>7.2%</cell><cell>7.6%</cell><cell>5.4%</cell><cell>5.5%</cell><cell>5.1%</cell><cell>8.0%</cell></row>
<row><cell>--Country</cell><cell>3.6%</cell><cell>3.3%</cell><cell>-</cell><cell>-</cell><cell>&lt;1%</cell><cell>2.7%</cell></row>
<row><cell>--Region</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>-</cell><cell>-</cell><cell>&lt;1%</cell><cell>1.0%</cell></row>
<row><cell>--Settlement</cell><cell>2.4%</cell><cell>3.3%</cell><cell>3.8%</cell><cell>3.5%</cell><cell>3.8%</cell><cell>4.1%</cell></row>
<row><cell>---City</cell><cell>1.6%</cell><cell>2.1%</cell><cell>2.3%</cell><cell>2.1%</cell><cell>&lt;1%</cell><cell>1.3%</cell></row>
<row><cell>Work</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>6.2%</cell><cell>6.3%</cell><cell>6.9%</cell><cell>7.3%</cell></row>
<row><cell>-Film</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>-</cell><cell>1.9%</cell><cell>1.5%</cell></row>
<row><cell>-MusicalWork</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>3.1%</cell><cell>3.5%</cell><cell>1.2%</cell><cell>&lt;1%</cell></row>
<row><cell>--Album</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>3.1%</cell><cell>3.5%</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
<row><cell>Year</cell><cell>&lt;1%</cell><cell>&lt;1%</cell><cell>-</cell><cell>-</cell><cell>&lt;1%</cell><cell>&lt;1%</cell></row>
</table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 .</head><label>2</label><figDesc>Coverage of mentions that are mapped to one or more entities -total count and percentage</figDesc><table>
<row><cell>Benchmark</cell><cell cols="2">DBL (2M entries)</cell><cell cols="2">RDM (10M entries)</cell><cell cols="2">AIDA (18M entries)</cell><cell cols="2">GCW (378M entries)</cell><cell>Mention Count</cell></row>
<row><cell>Spotlight</cell><cell>235</cell><cell>89%</cell><cell>65</cell><cell>25%</cell><cell>227</cell><cell>86%</cell><cell>258</cell><cell>97%</cell><cell>265</cell></row>
<row><cell>KORE 50</cell><cell>117</cell><cell>90%</cell><cell>129</cell><cell>99%</cell><cell>128</cell><cell>98%</cell><cell>130</cell><cell>100%</cell><cell>130</cell></row>
<row><cell>Wikilinks</cell><cell>107,669</cell><cell>56%</cell><cell>114,443</cell><cell>60%</cell><cell>115,646</cell><cell>60%</cell><cell>170,765</cell><cell>89%</cell><cell>192,008</cell></row>
<row><cell cols="10">Experiment with case-insensitive mentions and dictionary labels</cell></row>
<row><cell>Spotlight</cell><cell>241</cell><cell>91%</cell><cell>249</cell><cell>94%</cell><cell>235</cell><cell>89%</cell><cell>258</cell><cell>97%</cell><cell>265</cell></row>
<row><cell>KORE 50</cell><cell>121</cell><cell>93%</cell><cell>130</cell><cell>100%</cell><cell>130</cell><cell>100%</cell><cell>130</cell><cell>100%</cell><cell>130</cell></row>
<row><cell>Wikilinks</cell><cell>114,278</cell><cell>60%</cell><cell>145,241</cell><cell>76%</cell><cell>128,139</cell><cell>67%</cell><cell>171,941</cell><cell>90%</cell><cell>192,008</cell></row>
</table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3 .</head><label>3</label><figDesc>Amount of entity candidates for all mapped mentions -overall and averaged per mapped mention</figDesc><table>
<row><cell>Benchmark</cell><cell cols="2">DBL (2M entries)</cell><cell cols="2">RDM (10M entries)</cell><cell cols="2">AIDA (18M entries)</cell><cell cols="2">GCW (378M entries)</cell><cell>Mention Count</cell></row>
<row><cell></cell><cell>candidates</cell><cell>avg.</cell><cell>candidates</cell><cell>avg.</cell><cell>candidates</cell><cell>avg.</cell><cell>candidates</cell><cell>avg.</cell><cell></cell></row>
<row><cell>Spotlight</cell><cell>1,849</cell><cell>7.9</cell><cell>1,024</cell><cell>15.8</cell><cell>6,487</cell><cell>28.6</cell><cell>134,493</cell><cell>521.3</cell><cell>265</cell></row>
<row><cell>KORE 50</cell><cell>2,980</cell><cell>25.5</cell><cell>16,936</cell><cell>131.3</cell><cell>74,967</cell><cell>585.7</cell><cell>36,772</cell><cell>282.9</cell><cell>130</cell></row>
<row><cell>Wikilinks</cell><cell>188,748</cell><cell>1.8</cell><cell>244,977</cell><cell>2.1</cell><cell></cell><cell>2.6</cell><cell>1,346,446</cell><cell>7.9</cell><cell>192,008</cell></row>
<row><cell cols="10">Experiment with case-insensitive mentions and dictionary labels</cell></row>
<row><cell>Spotlight</cell><cell>3,400</cell><cell>14.1</cell><cell>6,508</cell><cell>26.1</cell><cell>13,336</cell><cell>56.7</cell><cell>367,698</cell><cell>1425.2</cell><cell>265</cell></row>
<row><cell>KORE 50</cell><cell>3,079</cell><cell>25.4</cell><cell>16,946</cell><cell>130.4</cell><cell>75,326</cell><cell>579.4</cell><cell>46,244</cell><cell>355.7</cell><cell>130</cell></row>
<row><cell>Wikilinks</cell><cell>207,181</cell><cell>1.8</cell><cell>145,241</cell><cell>2.1</cell><cell>352,107</cell><cell>2.7</cell><cell>1.8 m.</cell><cell>10.6</cell><cell>192,008</cell></row>
</table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 4 .</head><label>4</label><figDesc>Maximum achievable recall -coverage of mentions whose annotated entity (in the benchmark) is contained in the list of candidates</figDesc><table>
<row><cell>Benchmark</cell><cell cols="2">DBL (2M entries)</cell><cell cols="2">RDM (10M entries)</cell><cell cols="2">AIDA (18M entries)</cell><cell cols="2">GCW (378M entries)</cell><cell>Mention Count</cell></row>
<row><cell>Spotlight</cell><cell>223</cell><cell>84%</cell><cell>60</cell><cell>23%</cell><cell>63</cell><cell>24%</cell><cell>241</cell><cell>91%</cell><cell>265</cell></row>
<row><cell>KORE 50</cell><cell>87</cell><cell>67%</cell><cell>93</cell><cell>72%</cell><cell>112</cell><cell>86%</cell><cell>110</cell><cell>85%</cell><cell>130</cell></row>
<row><cell>Wikilinks</cell><cell>82,338</cell><cell>43%</cell><cell>86,555</cell><cell>45%</cell><cell>82,565</cell><cell>43%</cell><cell>129,449</cell><cell>67%</cell><cell>192,008</cell></row>
<row><cell cols="10">Experiment with case-insensitive mentions and dictionary labels</cell></row>
<row><cell>Spotlight</cell><cell>224</cell><cell>85%</cell><cell>228</cell><cell>86%</cell><cell>75</cell><cell>28%</cell><cell>242</cell><cell>91%</cell><cell>265</cell></row>
<row><cell>KORE 50</cell><cell>89</cell><cell>68%</cell><cell>93</cell><cell>72%</cell><cell>112</cell><cell>86%</cell><cell>110</cell><cell>85%</cell><cell>130</cell></row>
<row><cell>Wikilinks</cell><cell>86,955</cell><cell>45%</cell><cell>106,713</cell><cell>56%</cell><cell>92,824</cell><cell>48%</cell><cell>130,335</cell><cell>68%</cell><cell>192,008</cell></row>
</table></figure>
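The coverage and maximum achievable recall reported in Tables 2 and 4 amount to two simple counts over a surface form dictionary. The following sketch illustrates both metrics on toy data (the dictionary entries and benchmark annotations below are invented for illustration, not taken from the actual datasets):

```python
# Toy surface-form dictionary standing in for DBL/RDM/AIDA/GCW:
# it maps lower-cased mention strings to sets of candidate entities.
dictionary = {
    "berlin": {"Berlin", "Berlin_(band)"},
    "obama": {"Barack_Obama"},
}

# Benchmark annotations: (mention, gold entity) pairs.
benchmark = [
    ("Berlin", "Berlin"),
    ("Obama", "Barack_Obama"),
    ("HPI", "Hasso_Plattner_Institut"),
]

def coverage_and_max_recall(benchmark, dictionary, case_insensitive=True):
    """Coverage: share of mentions mapped to at least one candidate (Table 2).
    Maximum achievable recall: share of mentions whose gold entity is
    among the candidates (Table 4)."""
    mapped = hits = 0
    for mention, gold in benchmark:
        key = mention.lower() if case_insensitive else mention
        candidates = dictionary.get(key, set())
        if candidates:
            mapped += 1          # the mention is covered by the dictionary
            if gold in candidates:
                hits += 1        # the annotated entity is retrievable at all
    n = len(benchmark)
    return mapped / n, hits / n

coverage, max_recall = coverage_and_max_recall(benchmark, dictionary)
```

Whatever disambiguation strategy is applied afterwards, its recall is bounded by this maximum achievable recall, since an entity absent from the candidate list can never be selected.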
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://www.cnts.ua.ac.be/conll2003/ner/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">It can be assumed that the slices are homogeneously mixed.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">http://www.yovisto.com/labs/ner-benchmarks/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">The measure is used later in the experiments as the Anchor-Link-Probability (cf. Section 4).</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">http://www.yago-knowledge.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_5">Conclusion: Evaluation results of NED approaches depend on the structure of the benchmark dataset used as well as on the dictionary used for entity candidate determination. The</note>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Recall and Precision achieved by Popularity -Anchor-Link-Probability in web document corpus</head><p>-In general, this popularity measure, based on the mention and the mapped entity, performs better than a popularity based only on the entities' incoming Wikipedia page links. -In particular, the recall of the GCW dictionary increases by between 13% and 55%. The recall increases for the RDM and AIDA dictionaries are not significant compared to page link popularity.</p></div>
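The Anchor-Link-Probability can be sketched as the relative frequency with which a surface form, when it occurs as link anchor text, points to a particular entity. A minimal illustration with invented link counts (the real statistics come from the Wikipedia or web document corpus; `anchor_link_probability` is an illustrative name, not the authors' implementation):

```python
from collections import Counter, defaultdict

# Toy anchor statistics: each pair records one occurrence of the anchor
# text linking to an entity.
observed_links = [
    ("Paris", "Paris"),
    ("Paris", "Paris"),
    ("Paris", "Paris_Hilton"),
]

link_counts = defaultdict(Counter)
for mention, entity in observed_links:
    link_counts[mention][entity] += 1

def anchor_link_probability(mention, entity):
    """P(entity | mention): how often the anchor text 'mention' links to
    'entity', relative to all links whose anchor text is 'mention'."""
    total = sum(link_counts[mention].values())
    return link_counts[mention][entity] / total if total else 0.0
```

In contrast, a page-link popularity scores an entity by its total number of incoming Wikipedia page links, independently of the mention, which is why the mention-conditioned measure tends to discriminate better.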
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Recall and Precision achieved by Popularity -Anchor-Link-Probability in Wikipedia corpus</head><p>-For the Spotlight and Wikilinks benchmarks this popularity measure achieves higher recall and precision than the popularity measure provided by the GCW dictionary. This probably results from the fact that the Wikipedia corpus is written by experienced authors who place links deliberately.</p><p>Overall results are shown in Table <ref type="table">7</ref>. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>General Findings</head><p>-For a simplified disambiguation process, Anchor-Link-Popularity performs better than Page-Link-Popularity, and Anchor-Link-Popularity calculated on the Wikipedia corpus performs better than the measure calculated on the web document corpus. -Dictionaries perform best on the benchmark constructed for the evaluation of their own applications. -Compared to the maximum achievable recall (of all dictionaries) on the KORE 50 benchmark, the recall achieved using a popularity measure as a simplified disambiguation process is very low. This confirms the benchmark's intention to contain mentions that are hard to disambiguate. -DBL performs very well over all benchmarks, especially with its popularity measure. Taking into account its size (2.2 m. entries) compared to the GCW dictionary (378 m. entries), this is a surprising discovery. -The DBL popularity measure has been calculated from the linked Wikipedia articles within the Wikipedia article corpus. Most Wikipedia articles are written by experienced authors who know how to write and place links within the corpus. This may explain why the Wikipedia-based Anchor-Link-Probability performs better than the popularity based on web documents.</p></div>			</div>
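The simplified disambiguation process referred to in these findings reduces to selecting the most popular candidate for each mention. A minimal sketch with hypothetical popularity scores (the candidate lists, scores, and the `disambiguate` helper are illustrative, not the actual system):

```python
# Candidate entities per mention with a hypothetical popularity score,
# e.g. a Page-Link- or Anchor-Link-Popularity value.
candidates = {
    "Paris": {"Paris": 0.9, "Paris_Hilton": 0.1},
    "Jaguar": {"Jaguar_Cars": 0.6, "Jaguar": 0.4},
}
gold = {"Paris": "Paris", "Jaguar": "Jaguar"}  # benchmark annotations

def disambiguate(mention):
    """Pick the most popular candidate; None if the mention is unmapped."""
    scored = candidates.get(mention)
    return max(scored, key=scored.get) if scored else None

correct = sum(disambiguate(m) == e for m, e in gold.items())
recall = correct / len(gold)  # 'Jaguar' is lost to the more popular reading
```

This baseline fails exactly on the mentions KORE 50 was designed around: those whose annotated entity is not the most popular reading of the surface form.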
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">WePS-3 evaluation campaign: Overview of the web people search clustering and attribute extraction tasks</title>
		<author>
			<persName><forename type="first">J</forename><surname>Artiles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Borthwick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gonzalo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sekine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Amigó</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CLEF (Notebook Papers/LABs/Workshops)</title>
				<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Creating a gold standard for person cross-document coreference resolution in Italian news</title>
		<author>
			<persName><forename type="first">L</forename><surname>Bentivogli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Girardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Pianta</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the LREC 2008 Workshop on Resources and Evaluation for Identity Matching, Entity Resolution and Entity Management, page 19</title>
				<meeting>of the LREC 2008 Workshop on Resources and Evaluation for Identity Matching, Entity Resolution and Entity Management, page 19<address><addrLine>Marrakech, Morocco</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2008-05">May 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A framework for benchmarking entity-annotation systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Cornolti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ferragina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ciaramita</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd international conference on World Wide Web, WWW &apos;13</title>
				<meeting>the 22nd international conference on World Wide Web, WWW &apos;13<address><addrLine>Geneva, Switzerland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="249" to="260" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A corpus for cross-document co-reference</title>
		<author>
			<persName><forename type="first">D</forename><surname>Day</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hitzeman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Wick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Crouch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Poesio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the LREC 2008 Workshop on Resources and Evaluation for Identity Matching, Entity Resolution and Entity Management</title>
				<meeting>of the LREC 2008 Workshop on Resources and Evaluation for Identity Matching, Entity Resolution and Entity Management</meeting>
		<imprint>
			<date type="published" when="2008-05">May 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A comparison of knowledge extraction tools for the semantic web</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gangemi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web: Semantics and Big Data</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">7882</biblScope>
			<biblScope unit="page" from="351" to="366" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Integrating NLP using linked data</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hellmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Auer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brümmer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of 12th Int. Semantic Web Conf</title>
				<meeting>of 12th Int. Semantic Web Conf<address><addrLine>Sydney, Australia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013-10">October 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">KORE: Keyphrase overlap relatedness for entity disambiguation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hoffart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Seufert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">B</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Theobald</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Weikum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 21st ACM international conference on Information and knowledge management</title>
				<meeting>of the 21st ACM international conference on Information and knowledge management</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="545" to="554" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Robust disambiguation of named entities in text</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hoffart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Yosef</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Bordino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Fürstenau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pinkal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Spaniol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Taneva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Thater</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Weikum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the Conf. on Empirical Methods in Natural Language Processing, EMNLP &apos;11</title>
				<meeting>of the Conf. on Empirical Methods in Natural Language Processing, EMNLP &apos;11<address><addrLine>Stroudsburg, PA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="782" to="792" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">DBpedia Spotlight: shedding light on the web of documents</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">N</forename><surname>Mendes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jakob</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>García-Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bizer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 7th Int. Conf. on Semantic Systems (I-Semantics)</title>
				<meeting>of the 7th Int. Conf. on Semantic Systems (I-Semantics)</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Wikilinks: A large-scale cross-document coreference corpus labeled via links to Wikipedia</title>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Subramanya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mccallum</surname></persName>
		</author>
		<idno>UM-CS- 2012-015</idno>
		<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
		<respStmt>
			<orgName>University of Massachusetts Amherst</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A cross-lingual dictionary for English Wikipedia concepts</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">I</forename><surname>Spitkovsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">X</forename><surname>Chang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the Eighth Int. Conf. on Language Resources and Evaluation (LREC&apos;12)</title>
				<meeting>of the Eighth Int. Conf. on Language Resources and Evaluation (LREC&apos;12)<address><addrLine>Istanbul, Turkey</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012-05">May 2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">AIDA: an online tool for accurate disambiguation of named entities in text and tables</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Yosef</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hoffart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Bordino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Spaniol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Weikum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PVLDB</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="1450" to="1453" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
