=Paper=
{{Paper
|id=Vol-1581/paper11
|storemode=property
|title=Evaluating Entity Linking: An Analysis of Current Benchmark Datasets and a Roadmap for Doing a Better Job
|pdfUrl=https://ceur-ws.org/Vol-1581/paper11.pdf
|volume=Vol-1581
}}
==Evaluating Entity Linking: An Analysis of Current Benchmark Datasets and a Roadmap for Doing a Better Job==
Evaluating Entity Linking: An Analysis of Current Benchmark Datasets and a Roadmap for Doing a Better Job
Filip Ilievski1, Pablo Mendes2, Heiko Paulheim3, Julien Plu4, Giuseppe Rizzo4, Felix Tristam5, Marieke van Erp1, and Jörg Waitelonis6
1 VU University Amsterdam
2 IBM Research USA
3 University of Mannheim
4 EURECOM
5 CITEC, Bielefeld University
6 Hasso-Plattner-Institut, Universität Potsdam
Abstract. Entity linking has become a popular task in both the natural
language processing and semantic web communities. However, we find
that the benchmark datasets currently used for entity linking do not
accurately evaluate entity linking systems. In this paper, we chart the
strengths and weaknesses of current benchmark datasets and sketch a
roadmap for the community to devise better ones.
An extended abstract that followed from this discussion was submitted
to LREC 2016.
Keywords: natural language processing, semantic web, entity linking