<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Making Sense of Microposts (#MSM2013) Concept Extraction Challenge</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Amparo Elizabeth Cano Basave</string-name>
          <email>a.cano_basave@aston.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Varga</string-name>
          <email>a.varga@dcs.shef.ac.uk</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Matthew Rowe</string-name>
          <email>m.rowe@lancaster.ac.uk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Milan Stankovic</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aba-Sah Dadzie</string-name>
          <email>a.dadzie@cs.bham.ac.uk</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>KMi, The Open University</institution>
          ,
          <addr-line>Milton Keynes</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Computing and Communications, Lancaster University</institution>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>SØpage</institution>
          ,
          <addr-line>Paris</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>The OAK Group, Dept. of Computer Science, The University of Sheffield</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Microposts are small fragments of social media content that have been published using a lightweight paradigm (e.g. Tweets, Facebook likes, foursquare check-ins). Microposts have been used for a variety of applications (e.g. sentiment analysis, opinion mining, trend analysis), by gleaning useful information, often using third-party concept extraction tools. There has been very large uptake of such tools in the last few years, along with the creation and adoption of new methods for concept extraction. However, the evaluation of such efforts has been largely consigned to document corpora (e.g. news articles), questioning the suitability of concept extraction tools and methods for Micropost data. This report describes the Making Sense of Microposts Workshop (#MSM2013) Concept Extraction Challenge, hosted in conjunction with the 2013 World Wide Web conference (WWW'13). The Challenge dataset comprised a manually annotated training corpus of Microposts and an unlabelled test corpus. Participants were set the task of engineering a concept extraction system for a defined set of concepts. Out of a total of 22 complete submissions, 13 were accepted for presentation at the workshop; the submissions covered methods ranging from sequence mining algorithms for attribute extraction to part-of-speech tagging for Micropost cleaning, and rule-based and discriminative models for token classification. In this report we describe the evaluation process and explain the performance of different approaches in different contexts.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Since the first Making Sense of Microposts (#MSM) workshop at the Extended Semantic Web Conference in 2011 through to the most recent workshop in 2013, we have received over 60 submissions covering a wide range of topics related to interpreting Microposts and (re)using the knowledge content of Microposts. One central theme that has run through such work has been the need to understand and learn from Microposts (social network-based posts that are small in size and published using minimal effort, from a variety of applications and on different devices), so that such information, given its public availability and ease of retrieval, can be reused in different applications and contexts (e.g. music recommendation, social bots, news feeds). Such usage often requires identifying entities or concepts in Microposts, and extracting them accordingly. However, this can be hindered by:
(i) the noisy lexical nature of Microposts, where terminology differs between users when referring to the same thing and abbreviations are commonplace;
(ii) the limited length of Microposts, which restricts the contextual information and cues that are available in normal document corpora.</p>
      <p>
        The exponential increase in the rate of publication and availability of
Microposts (Tweets, FourSquare check-ins, Facebook status updates, etc.), and
applications used to generate them, has led to an increase in the use of third-party
entity extraction APIs and tools. These function by taking a given text as input, identifying entities within it, and extracting entity type-value tuples.
Rizzo &amp; Troncy [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] evaluated the performance of entity extraction APIs over
news corpora, assessing the performance of extraction and entity
disambiguation. This work has been invaluable in providing a reference point for judging
the performance of extraction APIs over well-structured news data. However, an
assessment of the performance of extraction APIs over Microposts has yet to be
performed.
      </p>
      <p>This prompted the Concept Extraction Challenge held as part of the Making Sense of Microposts Workshop (#MSM2013) at the 2013 World Wide Web Conference (WWW'13). The rationale behind this was that such a challenge, in an open and competitive environment, would encourage and advance novel, improved approaches to extracting concepts from Microposts. This report describes the #MSM2013 Concept Extraction Challenge, the collaborative annotation of the corpus of Microposts, and our evaluation of the performance of each submission. We also describe the approaches taken in the systems entered, using both established and developing alternative approaches to concept extraction, how well they performed, and how system performance differed across concepts. The resulting body of work has implications for researchers interested in the task of extracting information from social data, and for application designers and engineers who wish to harvest information from Microposts for their own applications.</p>
    </sec>
    <sec id="sec-2">
      <title>The Challenge</title>
      <p>We begin by describing the goal of the challenge and the task set, and the process
we followed to generate the corpus of Microposts. We conclude this section with
the list of submissions accepted.</p>
      <sec id="sec-2-1">
        <title>The Task and Goal</title>
        <p>The challenge required participants to build semi-automated systems to identify concepts within Microposts and extract matching entity types for each concept identified, where concepts are defined as abstract notions of things. In order to focus the challenge we restricted the classification to four entity types:
(i) Person PER, e.g. Obama;
(ii) Organisation ORG, e.g. NASA;
(iii) Location LOC, e.g. New York;
(iv) Miscellaneous MISC, consisting of the following: film/movie, entertainment award event, political event, programming language, sporting event and TV show.</p>
        <p>Submissions were required to recognise these entity types within each
Micropost, and extract the corresponding entity type-value tuples from the Micropost.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Example</title>
        <p>Consider the following example, taken from our annotated corpus: 870,000 people in canada depend on #foodbanks 25% increase in the last 2 years please give generously</p>
        <p>The fourth token in this Micropost refers to the location Canada; an entry to the challenge would be required to spot this token and extract it as an annotation, as: LOC/canada;</p>
        <p>The complete description of concept types and their scope, and additional examples, can be found on the challenge website 5, and also in the appendices in the challenge proceedings.</p>
        <p>To encourage competitiveness we solicited sponsorship for the winning submission. This was provided by the online auction web site eBay 6, which offered a $1500 prize for the winning entry. This generous sponsorship is testimony to the growing industry interest in issues related to the automatic understanding of short, predominantly textual posts (Microposts); challenges faced by major Social Web and other web sites and, increasingly, by marketing and consumer analysts and customer support across industry, government, state and not-for-profit organisations around the world.</p>
      </sec>
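To illustrate the expected input/output behaviour, the example above can be sketched in code. This is only a toy illustration: the gazetteer, the function name and the tuple representation are our assumptions, not part of the official challenge kit.

```python
# A Micropost from the corpus, and the (entity type, entity value) tuple an
# entry should emit for it. The tiny location gazetteer is illustrative only.
def extract(text):
    """Trivial single-token lookup extractor, for illustration only."""
    locations = {"canada"}  # stand-in gazetteer; real systems used far richer resources
    found = []
    for token in text.lower().split():
        if token in locations:
            found.append(("LOC", token))
    return found

micropost = "870,000 people in canada depend on #foodbanks"
print(extract(micropost))  # [('LOC', 'canada')]
```

A real entry must of course also handle multi-word values (e.g. New York), which a per-token lookup like this cannot.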
      <sec id="sec-2-4">
        <title>Data Collection and Annotation</title>
        <p>The dataset consists of the message fields of each of 4341 manually annotated Microposts, on a variety of topics, including comments on the news and politics, collected from the end of 2010 to the beginning of 2011, with a 60% / 40% split between training and test data. The annotation of each Micropost in the training dataset gave all participants a common base from which to learn extraction patterns. The test dataset contained no annotations; the challenge task was for participants to provide these. The complete dataset, including a list of changes and the gold standard, is available on the #MSM2013 challenge web pages 7, accessible under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.</p>
        <p>5 http://oak.dcs.shef.ac.uk/msm2013/challenge.html</p>
        <p>6 http://www.ebay.com</p>
      </sec>
      <sec id="sec-2-7">
        <title>Gold Standard Annotation</title>
        <p>To assess the performance of the submissions we used an underlying ground truth, or gold standard. In the first instance, the dataset was annotated by two of the authors of this report. Subsequently, we logged corrections to the annotations in the training data submitted by participants, following which we released an updated dataset. After this, based on a recommendation, we set up a GitHub repository to simplify collaborative annotation of the dataset. Four of the authors of this report then annotated a quarter of the dataset each, and then checked the annotations that the other three had performed to verify correctness. For those entries for which consensus was not reached, discussion between all four annotators was used to come to a final conclusion. This process resulted in better quality and higher consensus in the annotations. A very small number of errors was reported subsequent to this; a final version with these corrections was used by participants for their last set of experiments and to submit their final results.</p>
        <p>Figure 1 presents the entity type distributions over the training set, test set
and over the entire corpus.</p>
        <p>[Figure 1: distribution of the entity types MISC, PER, ORG and LOC over the training set, the test set and the entire corpus.]</p>
      </sec>
      <sec id="sec-2-8">
        <title>7 http://oak.dcs.shef.ac.uk/msm2013/ie_challenge</title>
      </sec>
      <sec id="sec-2-9">
        <title>Challenge Submissions</title>
        <p>
          Twenty-two complete submissions were received for the challenge, each of which consisted of a short paper explaining the system's approach and up to three different test set annotations generated by running the system with different settings. After peer review, thirteen submissions were accepted; for each, the submission run with the best overall performance was taken as the result of the system and used in the rankings. The accepted submissions are listed in Table 1, with the run taken as the result set for each. Participants approached the concept extraction task with rule-based, machine learning and hybrid methods. A summary of each approach can be found in Figure 2, with detail in the author descriptions that follow this report. We compared these approaches according to various dimensions: state of the art (SoA) named entity recognition (NER) features employed (columns 4-11) ([
          <xref ref-type="bibr" rid="ref13 ref6">13,6</xref>
          ]), classifiers used for both extraction and classification of entities (columns 12-13), additional linguistic knowledge sources used (column 14), special pre-processing steps performed (column 15), other non-SoA NER features used (column 16), and finally, the list of off-the-shelf systems incorporated (column 17).
        </p>
        <p>From the results and participants’ experiments we make a number of
observations. With regard to the strategy employed, the best performing systems (from
the top, 14, 21, 15, 25), based on overall F1 score (see Section 3), were hybrid.</p>
        <p>[Figure 2: summary of the accepted submissions' approaches: SoA NER features (columns 4-11), classifiers used for extraction and classification (columns 12-13), linguistic knowledge sources (column 14), pre-processing steps (column 15), other features (column 16) and off-the-shelf systems (column 17).]</p>
        <p>
The success of these models appears to rely on the application of off-the-shelf systems (e.g. AIDA [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], ANNIE [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], OpenNLP 8, Illinois NET [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], Illinois Wikifier [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], LingPipe 9, OpenCalais 10, StanfordNER [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], WikiMiner 11, NERD [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], TWNer [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], Alchemy 12, DBpedia Spotlight [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] 13, Zemanta 14) for either entity extraction (identifying the boundaries of an entity) or classification (assigning a semantic type to an entity). For the best performing system (14), the complete concept classification component was executed by the (existing) concept disambiguation tool AIDA. Other systems (21, 15, 25), on the other hand, made use of the output of multiple off-the-shelf systems, resulting in additional features (such as the confidence scores of each individual NER extractor, ConfScores) for the final concept extractors, balancing in this way the contribution of existing extractors.
        </p>
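The report does not specify how the hybrid systems merged the outputs of multiple off-the-shelf extractors, so the following is only a hedged sketch of one plausible scheme: summed confidence voting over (type, value) tuples, with an invented threshold and invented extractor outputs.

```python
from collections import defaultdict

def combine(candidates, threshold=0.5):
    """Merge (type, value, confidence) candidates from several extractors by
    summing the confidence per (type, lowercased value) tuple and keeping the
    tuples whose total clears a threshold -- one simple way of balancing the
    contribution of existing extractors."""
    scores = defaultdict(float)
    for system_output in candidates:
        for etype, value, conf in system_output:
            scores[(etype, value.lower())] += conf
    return {tv for tv, score in scores.items() if score >= threshold}

# Hypothetical outputs from three extractors on one Micropost:
sys_a = [("PER", "Obama", 0.9)]
sys_b = [("PER", "obama", 0.8), ("ORG", "NASA", 0.3)]
sys_c = [("LOC", "Obama", 0.2)]
print(combine([sys_a, sys_b, sys_c]))  # {('PER', 'obama')}
```

Here two extractors agreeing on PER/Obama outweigh a single low-confidence dissenting vote, which is the intuition behind using ConfScores as features.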
        <p>Among the rule-based approaches, the winning strategy was also similar. Submission 20 achieved the fourth best result overall, by taking an existing rule-based system (ANNIE) and simply increasing the coverage of captured entities by building new gazetteers 15. We also find that for entity extraction the participants used both rule-based and statistical approaches. Considering current state of the art approaches, statistical models are able to handle this task well.</p>
        <p>
          Looking at features, gazetteer membership and part-of-speech (POS) features played an important role; the best systems include these. For the gazetteers, a large number of different resources were used, including Yago, WordNet, DBpedia, Freebase, Microsoft N-grams and Google. Existing POS taggers were trained on newswire text (e.g. ANNIEPos [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], NLTKPos [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], POS trained on the Treebank corpus (PosTreebank), Freeling [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]). Additionally, there appears to be a trend towards incorporating recent POS taggers trained on Micropost data (e.g. TwPos2011 [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]).
        </p>
        <p>
          Considering pre-processing of Microposts, we find the following:
          - removal of Twitter-specific markers, e.g. hashtags (#), mentions (@), retweets (RT);
          - removal of external URL links within Microposts (URL);
          - removal of punctuation marks (Punct), e.g. points, brackets;
          - removal of well-known slang words using dictionaries 16 (Slang), e.g. lol, tmr, which are unlikely to refer to named entities;
          - removal of words representing exaggerated emotions (MissSpell), e.g. nooooo, goooooood, hahahaha;
          - transformation of each word to lowercase (LowerCase);
          - capitalisation of the first letter of each word (Capitalise).
          8 http://opennlp.apache.org
          9 http://alias-i.com/lingpipe
          10 http://www.opencalais.com
          11 http://wikipedia-miner.cms.waikato.ac.nz
          12 http://www.alchemyapi.com
          13 http://dbpedia.org/spotlight
          14 http://www.zemanta.com
          15 Another off-the-shelf entity extractor employed was the BabelNet API [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], in submission 32.
          16 http://www.noslang.com/dictionary/full
          http://www.chatslang.com/terms/twitter
          http://www.chatslang.com/terms/facebook
        </p>
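These pre-processing steps can be sketched as a small pipeline. The regular expressions and the two-entry slang dictionary below are illustrative assumptions, not any participant's actual code.

```python
import re

SLANG = {"lol", "tmr"}  # stand-in for the noslang.com/chatslang dictionaries

def preprocess(text):
    """Apply the pre-processing steps listed above to one Micropost."""
    text = re.sub(r"\bRT\b", "", text)        # retweet markers (RT)
    text = re.sub(r"[@#](\w+)", r"\1", text)  # strip @ and # but keep the word
    text = re.sub(r"https?://\S+", "", text)  # external URL links (URL)
    text = re.sub(r"[^\w\s]", "", text)       # punctuation marks (Punct)
    text = re.sub(r"(\w)\1{2,}", r"\1", text) # exaggerated spellings (MissSpell)
    tokens = [t for t in text.split() if t.lower() not in SLANG]  # Slang
    return " ".join(tokens).lower()           # LowerCase variant

print(preprocess("RT @bob: nooooo, lol! see http://t.co/x #foodbanks tmr"))
# bob no see foodbanks
```

Note that collapsing elongated words to a single character is a crude normalisation; participants could equally collapse to two characters or restore dictionary forms.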
        <p>
          With respect to the data used for training the entity extractors, the majority of submissions utilised the challenge training dataset, containing annotated Micropost data (TW), alone. A single submission (3, the sixth best system overall) made use of a large silver dataset (CoNLL 2003 [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], ACE 2004 and ACE 2005 17) together with the training dataset annotations, and achieved the best performance among the statistical methods.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Evaluation of Challenge Submissions</title>
      <sec id="sec-3-1">
        <title>Evaluation Measures</title>
        <p>The evaluation involved assessing the correctness of a system (S), in terms of the performance of the system's entity type classifiers when extracting entities from the test set (TS). For each instance in TS, a system must provide a set of tuples of the form (entity type, entity value). The evaluation compared these output tuples against those in the gold standard (GS). The metrics used to evaluate these tuples were the standard precision (P), recall (R) and f-measure (F1), calculated for each entity type. The final result for each system was the average performance across the four defined entity types.</p>
        <p>To assess the correctness of the tuples of an entity type t provided by a system S, we performed a strict match between the tuples submitted and those in the GS. We consider a strict match as one in which there is an exact match, after conversion to lowercase, between a system value and the GS value for a given entity type t. Let (x, y) ∈ S_t denote the set of tuples extracted for entity type t by system S, and (x, y) ∈ GS_t denote the set of tuples for entity type t in the gold standard. We define the set of True Positives (TP), False Positives (FP) and False Negatives (FN) for a given system as:</p>
        <p>TP_t = {(x, y) | (x, y) ∈ S_t ∩ GS_t} (1)</p>
        <p>FP_t = {(x, y) | (x, y) ∈ S_t ∧ (x, y) ∉ GS_t} (2)</p>
        <p>FN_t = {(x, y) | (x, y) ∈ GS_t ∧ (x, y) ∉ S_t} (3)</p>
        <p>Therefore TP_t defines the set of true positives, considering the entity type and value of tuples; FP_t is the set of false positives, the unexpected results for an entity type t; FN_t is the set of false negatives, denoting the entities that were missed by the extraction system yet appear within the gold standard. As we require matching of the tuples (x, y) we are looking for strict extraction matches: a system must both detect the correct entity type (x) and extract the correct matching entity value (y) from a Micropost. From this set of definitions we define precision (P_t) and recall (R_t) for a given entity type t as follows:</p>
        <p>17 the ACE Program: http://projects.ldc.upenn.edu/ace</p>
        <p>As we compute the precision and recall on a per-entity-type basis, we define the average precision and recall of a given system S, and the harmonic mean F1 between these measures:</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Results</title>
      <p>P_t = |TP_t| / |TP_t ∪ FP_t| (4)</p>
      <p>R_t = |TP_t| / |TP_t ∪ FN_t| (5)</p>
      <p>P = (P_PER + P_ORG + P_LOC + P_MISC) / 4 (6)</p>
      <p>R = (R_PER + R_ORG + R_LOC + R_MISC) / 4 (7)</p>
      <p>F1 = 2PR / (P + R) (8)</p>
      <p>We report the differences in performance between participants' systems, with a focus on the differences in performance by entity type. The following subsections report the results of the evaluated systems in terms of precision, recall and F-measure, following the metrics defined in subsection 3.1.</p>
      <sec id="sec-4-1">
        <title>Results by Measure</title>
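The strict-match metrics of subsection 3.1 can be sketched in code as follows. This is a minimal re-implementation for illustration; the data layout (a dict mapping each entity type to a set of lowercased values) and the example outputs are our assumptions.

```python
# Strict-match evaluation, macro-averaged over the four entity types.
TYPES = ("PER", "ORG", "LOC", "MISC")

def evaluate(system, gold):
    """Return macro-averaged precision, recall and F1 for one system."""
    precisions, recalls = [], []
    for t in TYPES:
        tp = len(system[t] & gold[t])   # strict matches, eq. (1)
        fp = len(system[t] - gold[t])   # unexpected results, eq. (2)
        fn = len(gold[t] - system[t])   # missed entities, eq. (3)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)  # eq. (4)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)     # eq. (5)
    p = sum(precisions) / len(TYPES)                  # eq. (6)
    r = sum(recalls) / len(TYPES)                     # eq. (7)
    f1 = 2 * p * r / (p + r) if p + r else 0.0        # eq. (8), harmonic mean
    return p, r, f1

# Invented outputs for one hypothetical system:
gold = {"PER": {"obama"}, "ORG": {"nasa"}, "LOC": {"canada"}, "MISC": set()}
system = {"PER": {"obama"}, "ORG": set(), "LOC": {"canada", "paris"}, "MISC": set()}
print(evaluate(system, gold))
```

Since TP_t and FP_t are disjoint, |TP_t ∪ FP_t| equals tp + fp, which is what the code uses.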
        <p>Precision. We begin by discussing the performance of the submissions in terms of precision. Precision measures the accuracy, or 'purity', of the detected entities in terms of the proportion of false positives within the returned set: high precision equates to a low false positive rate. Table 3.2 shows that hybrid systems are the top 4 ranked systems (in descending order, 14, 21, 30, 15), suggesting that a combination of rules and data-driven approaches yields increased precision. Studying the features of the top-performing systems, we note that maintaining capitalisation is correlated with high precision. There is, however, clear variance in the other techniques used (classifiers, extraction methods, etc.) between the systems.</p>
        <p>Fine-grained insight into the disparities in precision performance was obtained by inspecting the performance of the submissions across the different concept types (person, organisation, location, miscellaneous). Figure 3a presents the distribution of precision values across these four concept types and the macro average of these values. We find that systems do well (above the median of average precision values) for person and location concepts, and perform worse than the median for organisations and miscellaneous. For the entity type 'miscellaneous', this is not surprising, as it features a fairly nuanced definition, including films and movies, entertainment award events, political events, programming languages, sporting events and TV shows. We also note that several submissions used gazetteers in their systems, many of which were for locations; this could have contributed to the higher precision values for location concepts. Recall. Although precision affords insight into the accuracy of the entities identified across different concept types, it does not allow for inspecting the detection rate over all possible entities. To facilitate this we also report the recall scores of each submission, providing an assessment of the entity coverage of each approach. Table 3 presents the recall values for each system, for each concept type and across all concept types. Once again, as with precision, we note that hybrid systems (21, 15, 14) appear at the top of the rankings, with a rule-based approach (20) and a data-driven approach (3) coming fourth and fifth respectively.</p>
        <p>Looking at the distribution of recall scores across the submissions in Figure 3c, we see a similar picture to that of the precision plots. For instance, for the person and location concepts we note that the submissions exceed the median of all concepts (when the macro-average of the recall scores is taken), while for organisation and miscellaneous lower values than the median are observed. This again comes back to the nuanced definition of the miscellaneous category, although its recall scores are higher on average than its precision scores. The availability of person name and place name gazetteers also benefits identification of the corresponding concept types. This suggests that additional effort is needed to improve organisation concept extraction and to provide information to seed the detection process, for instance through the provision of organisation name gazetteers. Interestingly, when we look at the best performing system in terms of recall over the organisation concept, we find that submission 14 uses a variety of third party lookup lists (Yago, Microsoft N-grams and WordNet), suggesting that this approach leads to increased coverage and accuracy when extracting organisation names. F-Measure (F1). By combining the precision and recall scores for the individual systems using the f-measure (F1) score we obtain an overall assessment of concept extraction performance. Table 4 presents the f-measure (F1) score for each submission and performance across the four concept types. We note that, as previously, hybrid systems do best overall (top-3 places), indicating that a combination of rules and data-driven approaches yields the best results. Submission 14 records the highest overall F1 score, and also the highest scores for the person and organisation concept types; submission 15 records the highest F1 score for the location concept type; while submission 21 yields the highest F1 score for the miscellaneous concept type. Submission 15 uses Google gazetteers together with part-of-speech tagging of noun and verb phrases, suggesting that this combination yields promising results for our nuanced miscellaneous concept type.</p>
        <p>Figure 3e shows the distribution of F1 scores across the concept types for each submission. We find, as before, that the systems do well for person and location and poorly for organisation and miscellaneous. The reduced performance for these latter two concept types is, as mentioned, attributable to the limited availability of organisation information in third party lookup lists. The aim of the #MSM2013 Concept Extraction Challenge was to foster an open initiative for extracting concepts from Microposts. Our motivation for hosting the challenge was born of the increased availability of third party extraction tools, and their widespread uptake, but the lack of an agreed formal evaluation of their accuracy when applied over Microposts, together with limited understanding of how performance differs between concept types. The challenge's task involved the identification of entity type and value tuples from a collection of Microposts. To our knowledge the entity annotation set of Microposts generated as a result of the challenge, thanks to the collaboration of all the participants, is the largest annotation set of its type openly available online. We hope that this will provide the basis for future efforts in this field and lead to a standardised evaluation effort for concept extraction from Microposts.</p>
        <p>The results from the challenge indicate that the systems that performed well: (i) used a hybrid approach, consisting of data-driven and rule-based techniques; and (ii) exploited available lookup lists, such as place name and person name gazetteers, and linked data resources. Our future efforts in the area of concept extraction from Microposts will feature additional hosted challenges, with more complex tasks, aiming to identify the differences in performance between disparate systems and their approaches, and to inform users of extraction tools of the suitability of different applications for different tasks and contexts.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>H.</given-names>
            <surname>Cunningham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Maynard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bontcheva</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Tablan</surname>
          </string-name>
          .
          <article-title>GATE: A framework and graphical development environment for robust NLP tools and applications</article-title>
          .
          <source>In Proceedings of the 40th Annual Meeting of the ACL</source>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Finkel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Grenager</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Manning</surname>
          </string-name>
          .
          <article-title>Incorporating non-local information into information extraction systems by gibbs sampling</article-title>
          .
          <source>In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics , ACL '05</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>K.</given-names>
            <surname>Gimpel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>O'Connor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Mills</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Eisenstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Heilman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yogatama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Flanigan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Smith</surname>
          </string-name>
          .
          <article-title>Part-of-speech tagging for Twitter: Annotation, features, and experiments</article-title>
          .
          <source>In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies</source>
          , pages
          <fpage>42</fpage>
          -
          <lpage>47</lpage>
          , Portland, Oregon, USA,
          <year>June 2011</year>
          . Association for Computational Linguistics
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>E.</given-names>
            <surname>Loper</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Bird</surname>
          </string-name>
          .
          <article-title>NLTK: The Natural Language Toolkit</article-title>
          .
          <source>In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics</source>
          , pages
          <fpage>62</fpage>
          -
          <lpage>69</lpage>
          . Somerset, NJ: Association for Computational Linguistics,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Mendes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jakob</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>García-Silva</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Bizer</surname>
          </string-name>
          .
          <article-title>DBpedia spotlight: shedding light on the web of documents</article-title>
          .
          <source>In Proceedings of the 7th International Conference on Semantic Systems , I-Semantics '11</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>D.</given-names>
            <surname>Nadeau</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Sekine</surname>
          </string-name>
          .
          <article-title>A survey of named entity recognition and classification</article-title>
          .
          <source>Lingvisticae Investigationes</source>
          ,
          <volume>30</volume>
          (
          <issue>1</issue>
          ),
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>R.</given-names>
            <surname>Navigli</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. P.</given-names>
            <surname>Ponzetto</surname>
          </string-name>
          .
          <article-title>BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>193</volume>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>L.</given-names>
            <surname>Padró</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Stanilovsky</surname>
          </string-name>
          .
          <article-title>Freeling 3.0: Towards wider multilinguality</article-title>
          .
          <source>In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)</source>
          , pages
          <fpage>2473</fpage>
          -
          <lpage>2479</lpage>
          , Istanbul, Turkey, May
          <year>2012</year>
          . ACL Anthology Identifier: L12-1224
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>L.</given-names>
            <surname>Ratinov</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Roth</surname>
          </string-name>
          .
          <article-title>Design challenges and misconceptions in named entity recognition</article-title>
          .
          <source>In Proceedings of the Thirteenth Conference on Computational Natural Language Learning</source>
          ,
          <source>CoNLL '09</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>L.-A.</given-names>
            <surname>Ratinov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Downey</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Anderson</surname>
          </string-name>
          .
          <article-title>Local and global algorithms for disambiguation to Wikipedia</article-title>
          .
          <source>In ACL</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>A.</given-names>
            <surname>Ritter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Clark</surname>
          </string-name>
          , Mausam, and
          <string-name>
            <given-names>O.</given-names>
            <surname>Etzioni</surname>
          </string-name>
          .
          <article-title>Named entity recognition in tweets: An experimental study</article-title>
          .
          <source>In EMNLP</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>G.</given-names>
            <surname>Rizzo</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Troncy</surname>
          </string-name>
          .
          <article-title>NERD: evaluating named entity recognition tools in the web of data</article-title>
          .
          <source>In ISWC 2011, Workshop on Web Scale Knowledge Extraction (WEKEX'11), October 23-27</source>
          , Bonn, Germany,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>S.</given-names>
            <surname>Sarawagi</surname>
          </string-name>
          .
          <article-title>Information extraction</article-title>
          .
          <source>Foundations and Trends in Databases</source>
          ,
          <volume>1</volume>
          :
          <fpage>261</fpage>
          -
          <lpage>377</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>E. F.</given-names>
            <surname>Tjong Kim Sang</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>De Meulder</surname>
          </string-name>
          .
          <article-title>Introduction to the CoNLL-2003 shared task: language-independent named entity recognition</article-title>
          .
          <source>In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003 - Volume 4 , CONLL '03</source>
          , pages
          <fpage>142</fpage>
          -
          <lpage>147</lpage>
          .
          <source>Association for Computational Linguistics</source>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Yosef</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hoffart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Bordino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Spaniol</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Weikum</surname>
          </string-name>
          .
          <article-title>AIDA: An online tool for accurate disambiguation of named entities in text and tables</article-title>
          .
          <source>PVLDB</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>