<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Integrating Open and Closed Information Extraction: Challenges and First Steps</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Arnab Dutta</string-name>
          <email>arnab@informatik.uni-mannheim.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mathias Niepert</string-name>
          <email>mniepert@cs.washington.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Christian Meilicke</string-name>
          <email>christian@informatik.uni-mannheim.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Simone Paolo Ponzetto</string-name>
          <email>simone@informatik.uni-mannheim.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science and Engineering, University of Washington</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Research Data and Web Science, University of Mannheim</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Over the past years, state-of-the-art information extraction (IE) systems such as NELL [5] and ReVerb [9] have achieved impressive results by producing very large knowledge resources at web scale with minimal supervision. However, these resources lack schema information, exhibit a high degree of ambiguity, and are difficult even for humans to interpret. Working with such resources becomes easier if there is a structured information base to which they can be linked. In this paper, we introduce the integration of open information extraction projects with Wikipedia-based IE projects that maintain a logical schema as an important challenge for the NLP, semantic web, and machine learning communities. We describe the problem, present a gold-standard benchmark, and take the first steps towards a data-driven solution. This is especially promising, since NELL and ReVerb typically achieve very large coverage but still lack a full-fledged, clean ontological structure which, on the other hand, could be provided by large-scale ontologies like DBpedia [2] or YAGO [13].</p>
      </abstract>
      <kwd-group>
        <kwd>Information extraction</kwd>
        <kwd>Entity Linking</kwd>
        <kwd>Ontologies</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        Research on information extraction (IE) systems has experienced a strong
momentum in recent years. While Wikipedia-based information extraction projects
such as DBpedia [
        <xref ref-type="bibr" rid="ref1 ref17">1, 17</xref>
        ] and YAGO [
        <xref ref-type="bibr" rid="ref13 ref25">25, 13</xref>
        ] have been in development for
several years, systems such as NELL [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and ReVerb [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] that work on very large
and unstructured text corpora have more recently achieved impressive results.
The developers of the latter systems have coined the term open information
extraction (OIE) to describe information extraction systems that are not
constrained by the boundaries of encyclopedic knowledge and the corresponding
fixed schemata that are, for instance, used by YAGO and DBpedia. The data
maintained by OIE systems is important for analyzing, reasoning about, and
discovering novel facts on the web and has the potential to result in a new
generation of web search engines [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. At the same time, the data of open IE projects
would benefit from a corresponding logical schema, even if it were incomplete and
light-weight in nature. Hence, we believe that the problem of integrating open
and schema-driven information extraction projects is a key scientific challenge.
In order to integrate existing IE projects we have to overcome the difficult
problem of linking different manifestations of the same real-world object, more
commonly known as the task of entity resolution. What makes this task
challenging is that triples from such systems are underspecified and ambiguous. Let us
illustrate this point with an example triple from Nell where two terms (subject
and object) are linked by some relationship (predicate):
      </p>
      <p>agentcollaborateswithagent(royals, mlb)
In this triple, royals and mlb are two terms which are linked by the relation
agentcollaborateswithagent. Interpreting these terms is difficult since they
can have several meanings, including very infrequent and highly specialized ones,
which are sometimes difficult to interpret even for humans. Here, royals refers
to the baseball team Kansas City Royals and mlb to Major League Baseball.</p>
      <p>In general, due to the fact that information on the Web is highly
heterogeneous, there can be a fair amount of ambiguity in the extracted facts. The
problem becomes even more obvious when we encounter triples like:
bankbankincountry(royal, ireland)
Here, royal refers to a different real-world entity, namely the Royal Bank of
Scotland. Hence, it is important to uniquely identify the terms in accordance
with the contextual information provided by the entire triple. In this paper, we
aim at aligning such polysemous terms from open IE systems to instances from
a closed IE system, focusing on NELL and DBpedia in particular.</p>
      <p>The remainder of the paper is organized as follows. In Section 2 we introduce
the information extraction projects relevant to our work. We present our baseline
algorithm for finding the best matching candidates for a term in Section 3, and in
Section 4 we introduce a gold standard for evaluating its performance. In Section 5
we report the performance results of the proposed approach. In Section 6 we discuss
related work on information extraction and entity linking. Finally, we conclude
the paper in Section 7.</p>
    </sec>
    <sec id="sec-2">
      <title>Information Extraction Projects: A Brief Overview</title>
      <p>
        The Never Ending Language Learning [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] (Nell) project's objective is the
creation and maintenance of a large-scale machine learning system that
continuously learns and extracts structured information from unstructured web pages.
Its extraction algorithms operate on a large corpus of more than 500 million
web pages1 and not solely on the set of Wikipedia articles. The NELL
system was bootstrapped with a small set of classes and relations and, for each
of those, 10-15 positive and negative instances. The guiding principle of NELL
is to build several semi-supervised machine learning [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] components that
accumulate instances of the classes and relations, re-train the machine learning
algorithms with these instances as training data, and re-apply the algorithms to
      </p>
      <sec id="sec-2-1">
        <title>1 http://lemurproject.org/clueweb09/</title>
        <p>extract novel instances. This process is repeated indefinitely, with each re-training
and extraction phase called an iteration. Since numerous extraction components
work in parallel and extract facts with different degrees of confidence in their
correctness, one of the most important aspects of Nell is its ability to combine
these different extraction algorithms into one coherent model. This is
accomplished with relatively simple linear machine learning algorithms that weigh
the different components based on their past accuracy.</p>
        <p>Nell has been running since 2010, initially fully automated and without
any human supervision. Since it experienced concept drift for some of its
relations and classes, that is, an increasingly worse extraction performance over
time, Nell is now given some corrections by humans to avoid this long-term
behavior. Nell does not adhere to any of the semantic web standards such as
RDF or description logic.</p>
        <p>
          DBpedia [
          <xref ref-type="bibr" rid="ref1 ref17">1, 17</xref>
          ] is a project that aims at automatically acquiring large amounts
of structured information from Wikipedia. It extracts information from infobox
templates, categories, geo-coordinates, etc. However, it does not learn relations
from the Wikipedia categories. The template information is mapped to an
ontology. In addition, DBpedia has a fixed set of classes and relations. Moreover, with
more than 1000 different relations, the ontology is much broader than other
existing ontologies like YAGO [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] or semantic lexicons like BabelNet [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
        </p>
        <p>
          DBpedia represents its data in accordance with the best-practices of
publishing linked open data. The term linked data describes an assortment of best
practices for publishing, sharing, and connecting structured data and knowledge
over the web [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. DBpedia's relations are modeled using the resource
description framework (RDF), a generic graph-based data model for describing objects
and their relationships. The entities in DBpedia have unique URIs. This makes
it appropriate as our reference knowledge base to which we can link the terms
from Nell. In the case of the examples from Section 1, by linking the terms
appropriately to DBpedia, we are able to attach to them an unambiguous
identifier which was initially missing.
        </p>
        <p>royals → http://dbpedia.org/resource/Kansas_City_Royals
royal → http://dbpedia.org/resource/The_Royal_Bank_of_Scotland</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Methodology</title>
      <p>
        Wikipedia is an exhaustive source of unstructured data which has been
extensively used to enrich machines with knowledge [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. In this work we use Wikipedia
as an entity-tagged corpus [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] in order to bridge knowledge encoded in Nell with
DBpedia. Since there is a corresponding DBpedia entity for each Wikipedia
article [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], we can in fact formulate our disambiguation problem as that of
linking entities mentioned within Nell triples to their respective Wikipedia articles.
Our problem is that, due to polysemy, a term from Nell can often refer to
several different articles in Wikipedia or, analogously, instances in DBpedia. For
instance, the term jaguar can refer to several articles, such as the car, the animal,
and so on.
      </p>
      <p>In this work we accordingly explore the idea of using Wikipedia to
find the most probable article for a given term. Wikipedia provides regular
data dumps, and there are off-the-shelf preprocessing tools to parse those
dumps. We used WikiPrep [11, 10] for our purpose. WikiPrep removes redundant
information from the original dumps and creates more relevant XML dumps with
additional information, such as the number of pages in each category, the
incoming links to each Wikipedia article and their anchor text, and a lot more2.
In our work, we are primarily interested in the link counts, namely the frequency
of anchor text labels pointing to the same Wikipedia page. Table 1 shows some
of the articles the anchors jaguar and lincoln refer to. Intuitively, out of all
the outgoing links with the anchor text jaguar, 1842 links pointed to the article
Jaguar Cars, and so on. Essentially, these anchors are analogous to the Nell
terms. Based on these counts, we create a ranked list of articles for a given
anchor3.</p>
      <p>Table 1. Snippet of the articles linked to using the anchors jaguar and lincoln.
anchor  | Article             | Link count
jaguar  | Jaguar Cars         | 1842
jaguar  | Jaguar Racing       | 440
jaguar  | Jaguar              | 414
lincoln | Lincoln, England    | 1844
lincoln | Lincoln, Nebraska   | 920
lincoln | Lincoln (2012 film) | 496</p>
      <p>
        As seen in Table 1, the output from WikiPrep can often be a long list of
anchor-article pairs, some of them having a link count as low as one.
Accordingly, we adopt a probabilistic approach to selecting the best possible
DBpedia instance. For any given anchor in Wikipedia, the fraction of links
pointing to an article is proportional to the probability that the anchor term
refers to that particular article [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. More formally, suppose some anchor e refers to N
articles A1, . . . , AN with respective link counts n1, . . . , nN; then the conditional
probability of e referring to Aj is given by P(Aj | e) = nj / (n1 + . . . + nN).
We compute these probabilities for each term we are interested in and, from the
list ranked by descending P(Aj | e), select the top-k candidates. The choice of k
is described in Section 4. We apply this idea to the Nell data set: for each Nell
triple, we take the terms occurring as subject and object and apply the procedure
above.
      </p>
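      <p>
        The candidate-ranking step above can be sketched in a few lines. The function below is an illustrative reconstruction, not the authors' code; the anchor statistics are the Table 1 counts for jaguar.

```python
def ranked_candidates(anchor, link_counts, k=3):
    """Rank the articles an anchor links to by P(A_j | e) = n_j / sum_i n_i."""
    counts = link_counts[anchor]
    total = sum(counts.values())
    # Sort articles by link count in descending order and keep the top-k.
    ranked = sorted(counts.items(), key=lambda item: item[1], reverse=True)
    return [(article, n / total) for article, n in ranked[:k]]

# Link counts for the anchor "jaguar" (values from Table 1).
wiki_links = {"jaguar": {"Jaguar Cars": 1842, "Jaguar Racing": 440, "Jaguar": 414}}
top = ranked_candidates("jaguar", wiki_links, k=1)
```

        Here top contains the single pair ("Jaguar Cars", 1842/2696), i.e., the anchor jaguar refers to Jaguar Cars with probability roughly 0.68.
      </p>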
    </sec>
    <sec id="sec-4">
      <title>Creating a Gold Standard</title>
      <p>
        Nell provides regular data dumps4 consisting of facts learned from the Web.
Based on this data we create a frequency distribution over the predicates. To this
end, we first clean up the data from the dumps (since these contain additional
2 http://www.cs.technion.ac.il/~gabr/resources/code/wikiprep/
3 Note that, while there are alternative data sets such as the Crosswiki data [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], in
this work we opted instead for exploiting only Wikipedia internal-link anchors since
we expect them to provide a cleaner source of data.
4 http://rtw.ml.cmu.edu/rtw/resources
      </p>
      <p>Table 2. The 30 most frequent Nell predicates.
Top predicates: generalizations, proxyfor, agentcreated, subpartof, atlocation,
mutualproxyfor, locationlocatedwithinlocation, athleteplayssport,
citylocatedinstate, professionistypeofprofession, subpartoforganization,
bookwriter, furniturefoundinroom, agentcollaborateswithagent,
animalistypeofanimal, agentactsinlocation, teamplaysagainstteam,
athleteplaysinleague, worksfor, chemicalistypeofchemical.
Random predicates: personleadsorganization,
countrylocatedingeopoliticallocation, actorstarredinmovie,
athleteledsportsteam, personbornincity, bankbankincountry,
weaponmadeincountry, athletebeatathlete, companyalsoknownas, lakeinstate.</p>
      <p>information, such as, for instance, the iteration of promotion, best literal
strings, and so on5, which are irrelevant to our task). In Table 2, we list the 30
most frequent predicates. Since the gold standard should not be biased towards
predicates with many assertions, we randomly sampled 12 predicates from the set
of predicates with at least 100 assertions (highlighted in bold in the table). In
this paper, we focus on this smaller set of predicates due to the time-consuming
nature of the manual annotations we needed to perform. However, we plan to
continuously extend the gold standard with additional predicates in the future.</p>
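      <p>
        The sampling step can be sketched as follows; the assertion counts below are hypothetical stand-ins for Nell's real frequency distribution, and the seed is arbitrary.

```python
import random

def sample_predicates(assertion_counts, min_assertions=100, n=12, seed=42):
    """Sample n predicates uniformly from those with enough assertions."""
    # Sort for a deterministic eligible pool regardless of dict order.
    eligible = sorted(p for p, c in assertion_counts.items() if c >= min_assertions)
    return random.Random(seed).sample(eligible, n)

# Hypothetical assertion counts, not Nell's real ones.
counts = {f"predicate{i}": 50 + 10 * i for i in range(20)}
chosen = sample_predicates(counts)
```

        With these toy counts, 15 of the 20 predicates reach the 100-assertion threshold, and 12 of them are drawn without replacement.
      </p>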
      <p>For each Nell predicate we randomly sampled 100 triples. We assigned each
predicate and the corresponding list of triples to an annotator. Since we wanted
to annotate a large number of triples within an acceptable time frame, we first
applied the method described in Section 3 to generate possible mapping
candidates for the Nell subject and object of each triple. In particular, we generated
the top-3 mappings, thereby avoiding the generation of too many possible
candidates, and presented those candidates to the annotator. Note that in some cases
(see Table 3), our method could not determine a possible mapping candidate
for a Nell instance. In such cases, the triple had to be annotated without
presenting a matching candidate for the subject, the object, or both. In our setting,
each annotation instance falls under one of the following three cases:
(i) One of the mapping candidates is chosen as the correct mapping, i.e., the
simplest case.
(ii) The correct mapping is not among the presented candidates (or no
candidates have been generated). However, the annotator can find the correct</p>
      <sec id="sec-4-1">
        <title>5 http://rtw.ml.cmu.edu/rtw/faq</title>
        <sec id="sec-4-1-1">
          <title>Nell-Subject</title>
          <p>stranger
gospel
riddle master
king john
Nell-Object
albert-camus
1st cand.
2nd cand.
3rd cand.
henry james
1st cand.
2nd cand.</p>
          <p>3rd cand.
patricia a mckillip
1st cand.
2nd cand.
3rd cand.
shakespeare
1st cand.
2nd cand.
3rd cand.</p>
        </sec>
        <sec id="sec-4-1-2">
          <title>DBP-Subject</title>
          <p>The Stranger (novel)</p>
          <p>Stranger (comics)
Stranger (Hilary Du song)
Characters of Myst</p>
          <p>?
Gospel music</p>
          <p>Gospel
Urban contemporary gospel
The Riddle-Master of Hed</p>
          <p>King John (play)
John, King of England</p>
          <p>King John (play)
King John (1899 lm)</p>
        </sec>
        <sec id="sec-4-1-3">
          <title>DBP-Object</title>
          <p>Albert Camus
Albert Camus
Henry James</p>
          <p>Henry James
Henry James (basketball)
Henry James, 1st Baron...</p>
          <p>Patricia A. McKillip
Patricia A. McKillip
William Shakespeare</p>
          <p>William Shakespeare
Shakespeare quadrangle
Shakespeare, Ontario</p>
          <p>mapping after a combined search in DBpedia, Wikipedia or other resources
available on the Web.
(iii) The annotator cannot determine a DBpedia entity to which the given Nell
instance should be mapped. This was the case when the term was too
ambiguous, underspeci ed, or not represented in DBpedia. In this case the
annotator marked the instance as unmatchable (`?').</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Experiments</title>
      <p>Evaluation Measures
In the following, we briefly revisit the definitions of precision and recall and
explain their application in our evaluation scenario. Let A refer to the mappings
generated by our algorithm, and G to the mappings in the gold standard.
Precision is defined as prec(A, G) = |A ∩ G| / |A| and recall as rec(A, G) =
|A ∩ G| / |G|. The F1 measure is the equally weighted harmonic mean of both
values, i.e., F1(A, G) = 2 · prec(A, G) · rec(A, G) / (prec(A, G) + rec(A, G)).</p>
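      <p>
        These set-based definitions translate directly to code; the mapping sets below are toy values for illustration, not entries from the gold standard.

```python
def precision(A, G):
    return len(A & G) / len(A)

def recall(A, G):
    return len(A & G) / len(G)

def f1(A, G):
    p, r = precision(A, G), recall(A, G)
    # Harmonic mean; defined as 0 when both precision and recall are 0.
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

# Toy mapping sets of (Nell term, DBpedia instance) pairs.
A = {("royals", "Kansas City Royals"), ("mlb", "Major League Baseball"),
     ("royal", "Royal Dutch Shell"), ("jaguar", "Jaguar Cars")}
G = {("royals", "Kansas City Royals"), ("mlb", "Major League Baseball"),
     ("royal", "Royal Bank of Scotland"), ("jaguar", "Jaguar Cars"),
     ("lincoln", "Lincoln, Nebraska")}
```

        Here |A ∩ G| = 3, so precision is 3/4, recall is 3/5, and F1 is 2/3.
      </p>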
      <p>If an annotator assigned a question mark, then the corresponding Nell term
could not be mapped and it does not appear in the gold standard G. This
can again be seen in Table 3, where we present the mappings generated by</p>
      <p>Predicates in the gold standard: agentcollaborateswithagent, lakeinstate,
personleadsorganization, bookwriter, animalistypeofanimal, teamplaysagainstteam,
companyalsoknownas, weaponmadeincountry, actorstarredinmovie, bankbankincountry,
citylocatedinstate, athleteledsportsteam.</p>
      <p>our algorithm for four triples, as well as the corresponding gold-standard
annotations. If the mapping A consists of the top-k possible candidates, computing
precision and recall on these examples gives, for k = 1, prec@1 = 4/7 ≈ 57% and
rec@1 = 4/7 ≈ 57%. Note that precision and recall are not the same in general,
because |A| ≠ |G| in most cases. More generally, we are interested in prec@k,
the fraction of top-k candidates that are correctly mapped, and rec@k, the
fraction of correct mappings that are in the top-k candidates. For k = 3, we
have prec@3 = 5/17 ≈ 29% and rec@3 = 5/7 ≈ 71%.</p>
      <p>It can be expected that prec@1 will have the highest score and rec@1 the
lowest. When we analyze A with k &gt; 1, we focus mainly on the increase in
recall. Here we are in particular interested in the smallest k for which
increasing it to k + 1 adds only a negligible number of additional correct
mappings to A.</p>
      <p>When generating the gold standard, we realized that finding the correct
mappings is often a hard task, sometimes difficult even for a human annotator.
We also observed that the difficulty of determining the gold standard varies
strongly across the properties we analyzed. For some of the properties we could
match all (or nearly all) subjects and objects in the chosen triples, while for
other properties up to 15% of the instances could not be matched.
In this case, the mapping for subject and object was annotated with a
question mark. We also observed cases in which an uncommon description was
chosen that had no counterpart in DBpedia. Some examples from the predicate
animalistypeofanimal are the labels furbearers or small mammals.</p>
      <p>[Figure 1. Precision and recall of the top-1 matches for each of the
twelve Nell predicates; the values range from roughly 0.78 to 0.89.]</p>
      <p>Results and Discussion
We run our algorithm against the gold standard6, and report the precision and
recall values. In Figure 1, we show the precision and recall values obtained on
the set of Nell predicates. These values are for top-1 matches. Precision and
recall vary across the predicates with lakeinstate having the highest precision.
Using micro-average method, for the top-1 matches we achieved a precision of
82.78% and an average recall of 81.31% across all the predicates. In the case of
macro-averaging, instead, we achieved precision of 82.61% and recall of 81.42%.</p>
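      <p>
        The difference between the two averaging schemes can be sketched with hypothetical per-predicate counts; the numbers below are illustrative, not the paper's results.

```python
# Per-predicate counts: (correct top-1 mappings, generated mappings, gold mappings).
per_predicate = {
    "lakeinstate": (95, 100, 100),
    "bookwriter": (80, 97, 99),
    "worksfor": (70, 90, 95),
}

# Micro-average: pool the raw counts over all predicates, then divide once.
micro_prec = (sum(c for c, a, g in per_predicate.values())
              / sum(a for c, a, g in per_predicate.values()))

# Macro-average: compute precision per predicate, then take the plain mean,
# so every predicate contributes equally regardless of its size.
macro_prec = (sum(c / a for c, a, g in per_predicate.values())
              / len(per_predicate))
```

        With these counts the micro-averaged precision is 245/287 ≈ 0.854, while the macro-averaged precision is ≈ 0.851; the two only coincide when all predicates have the same number of generated mappings.
      </p>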
      <p>In Figure 2, we show the values for rec@2, rec@5, and rec@10 compared
to rec@1, the recall values reported in Figure 1. By considering more possible
candidates with increasing k, every term gets a better chance of being matched
correctly, which explains the increase in rec@k with k. However, it must be
noted that for most of the predicates the values tend to saturate after rec@5.
This reflects the fact that, after a certain k, any further increase in k does not
add correct mappings, since our algorithm already provided a match within the
top-1 or top-2 candidates. Still, for some predicates we observe an increase even
at rec@10, because a correct matching candidate can still lie at a much lower
rank in the top-k list of candidates.</p>
      <p>In Figure 3, we plot the micro-average values of the precision, recall and F1
scores over varying k. We attain the best F1 score of 0.82 for k = 1 and the
recall values tend to saturate after k = 5.</p>
      <p>This raises an important question regarding the upper bound on the recall of
our algorithm. In practice, we cannot achieve a recall of 1.0 because we are
limited by factors such as:
- The matching candidate is never referred to by the term. For example, gs
refers to the company Goldman Sachs, but it never appeared among the possible
candidates, since Goldman Sachs is never referred to as gs in Wikipedia.
6 The data are freely available at http://web.informatik.uni-mannheim.de/data/
nell-dbpedia/NellGoldStandard.tar.</p>
      <p>[Figure 2. Recall at k = 1, 2, 5, and 10 (rec@1, rec@2, rec@5, rec@10)
for each of the twelve Nell predicates.]</p>
      <p>- Persons are often referred to by a combination of their middle and last
names, e.g., hussein obama. This term actually refers to President Barack
Obama, but with our approach we cannot find a good match.</p>
      <p>- Misspelled words: we have entities like missle instead of missile.</p>
      <p>However, there are ways to further improve the recall of our method, for
instance by means of string-similarity techniques, e.g., the Levenshtein edit
distance. A similarity threshold (say, as high as 95%) could then be tuned to
consider entities which only partially match a given term. Another alternative
would be to look for sub-string matches for terms involving the middle and last
names of persons. For instance, hussein obama can have a possible match if a
term like barack hussein obama has a candidate match. In addition, a similarity
threshold can be introduced in order to avoid matching arbitrarily longer terms.</p>
      <p>In general, thanks to the annotation task and our experiments, we were able
to acquire some useful insights about the data set and the proposed task:
- Predicates with polysemous entities, like companyalsoknownas, usually have
lower precision. The triples for this predicate made wide use of abbreviated
terms (the stock-exchange codes of the companies), which accounts for a lower
precision value.
- The Nell data is skewed towards particular regions or types. The triples
involving persons and sports primarily refer to basketball or baseball. Similarly,
for lakeinstate, nearly all the triples refer to lakes in the United States.</p>
    </sec>
    <sec id="sec-6">
      <title>Related Work</title>
      <p>
        Key contributions in information extraction have concentrated on minimizing
the amount of human supervision required in the knowledge harvesting process.
To this end, much work has explored unsupervised bootstrapping for a variety
of tasks, including the acquisition of binary relations [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], facts [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], semantic
class attributes and instances [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. Open Information Extraction further focused
on approaches that do not need any manually-labeled data [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]; however, the
output of these systems still needs to be disambiguated by linking it to entities
and relations from a knowledge base. Recent work has extensively explored the
usage of distant supervision for IE, namely by harvesting sentences containing
concepts whose relation is known and leveraging these sentences as training data
for supervised extractors [
        <xref ref-type="bibr" rid="ref14 ref27">27, 14</xref>
        ]. Regarding the integration of open and closed IE
projects, it is worthwhile to mention the work of [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], where a matrix-factorization
technique was employed for extracting relations across different domains. They
proposed a universal schema which supports cross-domain integration.
      </p>
      <p>
        There has been some work on instance matching in the recent past.
Researchers have transformed the task into a binary classification problem and
solved it with machine learning techniques [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Some have tried to enrich
unstructured data in the form of text with Wikipedia entities [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. However, in our
approach we consider the context of the entities while creating the gold
standard, which makes it a bit different from the above-mentioned entity-linking
approaches. Also, there are tools like Tìpalo [
        <xref ref-type="bibr" rid="ref12">12</xref>
          ] for the automatic typing of
DBpedia entities. It uses natural-language definitions from Wikipedia abstracts
and WordNet in the background for disambiguation. PARIS [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] takes a probabilistic
approach to aligning ontologies, utilizing the interdependence of instances and
schema to compute probabilities for the instance matches. Lin et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]
provide a novel approach to linking entities across millions of documents. They
take web-extracted facts and link the entities to Wikipedia by means of
information from Wikipedia itself, as well as additional features like string
similarity and, most importantly, the context information of the extracted facts.
The Silk framework [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ]
discovers missing links between entities across linked data sources by employing
similarity metrics between pairs of instances.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Conclusions</title>
      <p>In this paper, we introduced a most-frequent-entity baseline algorithm for
linking entities from an open-domain system to a closed one. We introduced a
gold standard for this task and evaluated our baseline against it. In the near
future, we plan to extend this work with more complex and robust methods, as
well as to extend our methodology to cover other open IE projects like ReVerb.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. Soren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and
          <string-name>
            <given-names>Zachary</given-names>
            <surname>Ives</surname>
          </string-name>
          .
          <article-title>DBpedia: A nucleus for a web of open data</article-title>
          .
          <source>In Proceedings of 6th International Semantic Web Conference joint with 2nd Asian Semantic Web Conference (ISWC+ASWC</source>
          <year>2007</year>
          ), pages
          <fpage>722</fpage>
          –
          <lpage>735</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Christian</given-names>
            <surname>Bizer</surname>
          </string-name>
          , Jens Lehmann, Georgi Kobilarov, Sören Auer, Christian Becker, Richard Cyganiak, and
          <string-name>
            <given-names>Sebastian</given-names>
            <surname>Hellmann</surname>
          </string-name>
          .
          <article-title>DBpedia – A crystallization point for the web of data</article-title>
          .
          <source>Journal of Web Semantics</source>
          ,
          <volume>7</volume>
          (
          <issue>3</issue>
          ):
          <fpage>154</fpage>
          –
          <lpage>165</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>Sergey</given-names>
            <surname>Brin</surname>
          </string-name>
          .
          <article-title>Extracting patterns and relations from the world wide web</article-title>
          .
          <source>In Selected papers from the International Workshop on The World Wide Web and Databases</source>
          ,
          <source>WebDB '98</source>
          , pages
          <fpage>172</fpage>
          –
          <lpage>183</lpage>
          . Springer-Verlag,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Razvan</given-names>
            <surname>Bunescu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Marius</given-names>
            <surname>Pasca</surname>
          </string-name>
          .
          <article-title>Using encyclopedic knowledge for named entity disambiguation</article-title>
          .
          <source>In Proc. of EACL-06</source>
          , pages
          <fpage>9</fpage>
          –
          <lpage>16</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Andrew</given-names>
            <surname>Carlson</surname>
          </string-name>
          , Justin Betteridge, Bryan Kisiel, Burr Settles,
          <string-name>
            <given-names>Estevam R.</given-names>
            <surname>Hruschka</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Tom M.</given-names>
            <surname>Mitchell</surname>
          </string-name>
          .
          <article-title>Toward an architecture for never-ending language learning</article-title>
          .
          <source>In Proc. of AAAI-10</source>
          , pages
          <fpage>1306</fpage>
          –
          <lpage>1313</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Olivier</given-names>
            <surname>Chapelle</surname>
          </string-name>
          , Bernhard Schölkopf, and
          <string-name>
            <given-names>Alexander</given-names>
            <surname>Zien</surname>
          </string-name>
          .
          <article-title>Semi-Supervised Learning</article-title>
          . MIT Press,
          <edition>1st edition</edition>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>Oren</given-names>
            <surname>Etzioni</surname>
          </string-name>
          .
          <article-title>Search needs a shake-up</article-title>
          .
          <source>Nature</source>
          ,
          <volume>476</volume>
          (
          <issue>7358</issue>
          ):
          <fpage>25</fpage>
          –
          <lpage>26</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Oren</given-names>
            <surname>Etzioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Michael</given-names>
            <surname>Cafarella</surname>
          </string-name>
          , Doug Downey, Stanley Kok,
          <string-name>
            <given-names>Ana-Maria</given-names>
            <surname>Popescu</surname>
          </string-name>
          , Tal Shaked, Stephen Soderland, Daniel S. Weld, and
          <string-name>
            <given-names>Alexander</given-names>
            <surname>Yates</surname>
          </string-name>
          .
          <article-title>Web-scale information extraction in KnowItAll: (preliminary results)</article-title>
          .
          <source>In Proc. of WWW '04</source>
          , pages
          <fpage>100</fpage>
          –
          <lpage>110</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>Anthony</given-names>
            <surname>Fader</surname>
          </string-name>
          , Stephen Soderland, and
          <string-name>
            <given-names>Oren</given-names>
            <surname>Etzioni</surname>
          </string-name>
          .
          <article-title>Identifying relations for open information extraction</article-title>
          .
          <source>In Proc. of EMNLP-11</source>
          , pages
          <fpage>1535</fpage>
          –
          <lpage>1545</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>Evgeniy</given-names>
            <surname>Gabrilovich</surname>
          </string-name>
          and
          <string-name>
            <given-names>Shaul</given-names>
            <surname>Markovitch</surname>
          </string-name>
          .
          <article-title>Overcoming the brittleness bottleneck using Wikipedia: Enhancing text categorization with encyclopedic knowledge</article-title>
          .
          <source>In Proc. of AAAI-06</source>
          , pages
          <fpage>1301</fpage>
          –
          <lpage>1306</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>Evgeniy</given-names>
            <surname>Gabrilovich</surname>
          </string-name>
          and
          <string-name>
            <given-names>Shaul</given-names>
            <surname>Markovitch</surname>
          </string-name>
          .
          <article-title>Computing semantic relatedness using Wikipedia-based explicit semantic analysis</article-title>
          .
          <source>In Proc. of IJCAI-07</source>
          , pages
          <fpage>1606</fpage>
          –
          <lpage>1611</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>Aldo</given-names>
            <surname>Gangemi</surname>
          </string-name>
          , Andrea Giovanni Nuzzolese, Valentina Presutti, Francesco Draicchio, Alberto Musetti, and
          <string-name>
            <given-names>Paolo</given-names>
            <surname>Ciancarini</surname>
          </string-name>
          .
          <article-title>Automatic typing of DBpedia entities</article-title>
          .
          <source>In The Semantic Web – ISWC</source>
          <year>2012</year>
          , volume
          <volume>7649</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>65</fpage>
          –
          <lpage>81</lpage>
          . Springer, Berlin and Heidelberg,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13. Johannes Hoffart,
          <string-name>
            <given-names>Fabian M.</given-names>
            <surname>Suchanek</surname>
          </string-name>
          , Klaus Berberich, and Gerhard Weikum.
          <article-title>YAGO2: A spatially and temporally enhanced knowledge base from Wikipedia</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>194</volume>
          :
          <fpage>28</fpage>
          –
          <lpage>61</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. Raphael Hoffmann, Congle Zhang, and
          <string-name>
            <given-names>Daniel S.</given-names>
            <surname>Weld</surname>
          </string-name>
          .
          <article-title>Learning 5000 relational extractors</article-title>
          .
          <source>In Proc. of ACL-10</source>
          , pages
          <fpage>286</fpage>
          –
          <lpage>295</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>Eduard</given-names>
            <surname>Hovy</surname>
          </string-name>
          , Roberto Navigli, and Simone Paolo Ponzetto.
          <article-title>Collaboratively built semi-structured content and Artificial Intelligence: The story so far</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>194</volume>
          :
          <fpage>2</fpage>
          –
          <lpage>27</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>Thomas</given-names>
            <surname>Lin</surname>
          </string-name>
          , Mausam, and
          <string-name>
            <given-names>Oren</given-names>
            <surname>Etzioni</surname>
          </string-name>
          .
          <article-title>Entity linking at web scale</article-title>
          .
          <source>In Proc. of AKBC-WEKEX '12</source>
          , pages
          <fpage>84</fpage>
          –
          <lpage>88</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>Pablo</given-names>
            <surname>Mendes</surname>
          </string-name>
          , Max Jakob, and
          <string-name>
            <given-names>Christian</given-names>
            <surname>Bizer</surname>
          </string-name>
          .
          <article-title>DBpedia: A multilingual cross-domain knowledge base</article-title>
          .
          <source>In Proc. of LREC-12</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18. David Milne and
          <string-name>
            <given-names>Ian H.</given-names>
            <surname>Witten</surname>
          </string-name>
          .
          <article-title>Learning to link with Wikipedia</article-title>
          .
          <source>In Proc. of CIKM '08</source>
          , pages
          <fpage>509</fpage>
          –
          <lpage>518</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>Roberto</given-names>
            <surname>Navigli</surname>
          </string-name>
          and
          <string-name>
            <given-names>Simone Paolo</given-names>
            <surname>Ponzetto</surname>
          </string-name>
          .
          <article-title>BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>193</volume>
          :
          <fpage>217</fpage>
          –
          <lpage>250</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>Marius</given-names>
            <surname>Pasca</surname>
          </string-name>
          .
          <article-title>Organizing and searching the world wide web of facts – step two: harnessing the wisdom of the crowds</article-title>
          .
          <source>In Proc. of WWW '07</source>
          , pages
          <fpage>101</fpage>
          –
          <lpage>110</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>Sebastian</given-names>
            <surname>Riedel</surname>
          </string-name>
          , Limin Yao,
          <string-name>
            <given-names>Benjamin M.</given-names>
            <surname>Marlin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Andrew</given-names>
            <surname>McCallum</surname>
          </string-name>
          .
          <article-title>Relation extraction with matrix factorization and universal schemas</article-title>
          .
          <source>In Joint Human Language Technology Conference/Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL '13)</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <given-names>Shu</given-names>
            <surname>Rong</surname>
          </string-name>
          , Xing Niu, Evan Wei Xiang, Haofen Wang,
          <string-name>
            <given-names>Qiang</given-names>
            <surname>Yang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Yong</given-names>
            <surname>Yu</surname>
          </string-name>
          .
          <article-title>A machine learning approach for instance matching based on similarity metrics</article-title>
          .
          <source>In The Semantic Web – ISWC</source>
          <year>2012</year>
          , volume
          <volume>7649</volume>
          of Lecture Notes in Computer Science, pages
          <fpage>460</fpage>
          –
          <lpage>475</lpage>
          . Springer, Berlin and Heidelberg,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <given-names>Valentin I.</given-names>
            <surname>Spitkovsky</surname>
          </string-name>
          and
          <string-name>
            <given-names>Angel X.</given-names>
            <surname>Chang</surname>
          </string-name>
          .
          <article-title>A cross-lingual dictionary for English Wikipedia concepts</article-title>
          .
          <source>In Proc. of LREC-12</source>
          , pages
          <fpage>3168</fpage>
          –
          <lpage>3175</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <given-names>Fabian M.</given-names>
            <surname>Suchanek</surname>
          </string-name>
          , Serge Abiteboul, and
          <string-name>
            <given-names>Pierre</given-names>
            <surname>Senellart</surname>
          </string-name>
          .
          <article-title>PARIS: Probabilistic alignment of relations, instances, and schema</article-title>
          .
          <source>Proc. VLDB Endow.</source>
          ,
          <volume>5</volume>
          (
          <issue>3</issue>
          ):
          <fpage>157</fpage>
          –
          <lpage>168</lpage>
          ,
          <month>November</month>
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <given-names>Fabian M.</given-names>
            <surname>Suchanek</surname>
          </string-name>
          , Gjergji Kasneci, and
          <string-name>
            <given-names>Gerhard</given-names>
            <surname>Weikum</surname>
          </string-name>
          .
          <article-title>Yago: A Core of Semantic Knowledge</article-title>
          .
          <source>In 16th International World Wide Web Conference (WWW 2007)</source>
          , New York, NY, USA,
          <year>2007</year>
          . ACM Press.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <given-names>Julius</given-names>
            <surname>Volz</surname>
          </string-name>
          , Christian Bizer,
          <string-name>
            <given-names>Martin</given-names>
            <surname>Gaedke</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Georgi</given-names>
            <surname>Kobilarov</surname>
          </string-name>
          .
          <article-title>Silk – A Link Discovery Framework for the Web of Data</article-title>
          .
          <source>In Proc. of LDOW '09</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <given-names>Fei</given-names>
            <surname>Wu</surname>
          </string-name>
          and Daniel S. Weld.
          <article-title>Open information extraction using Wikipedia</article-title>
          .
          <source>In Proc. of ACL-10</source>
          , pages
          <fpage>118</fpage>
          –
          <lpage>127</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>