<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Using Syntactic Dependencies and WordNet Classes for Noun Event Recognition</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yoonjae Jeong</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sung-Hyon Myaeng</string-name>
          <email>myaeng@kaist.ac.kr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Korea Advanced Institute of Science and Technology 291 Daehak-ro (373-1 Guseong-dong)</institution>
          ,
          <addr-line>Yuseong-gu, Daejeon 305-701</addr-line>
          ,
          <country>Republic of Korea</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The goal of this research is to devise a method for recognizing TimeML noun events more effectively. TimeML is the most recent annotation scheme for processing event and temporal expressions in natural language processing. In this paper, we argue and demonstrate that syntactic dependencies and deep-level WordNet classes are useful for recognizing events. We formulate the event recognition problem as a classification task using various features, including lexical semantic and dependency-based features. The experimental results show that our proposed method significantly outperforms a state-of-the-art approach. Our analysis of the results demonstrates that direct-object dependencies and deep-level WordNet hypernyms play pivotal roles in recognizing noun events.</p>
      </abstract>
      <kwd-group>
        <kwd>Event Recognition</kwd>
        <kwd>TimeML</kwd>
        <kwd>TimeBank</kwd>
        <kwd>WordNet</kwd>
        <kwd>Natural Language Processing</kwd>
        <kwd>Machine Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Automatic event extraction from text is an important task in the text mining
field. There are two types of definitions for events. In the area of topic detection and
tracking (TDT), an event is defined as an instance of a document-level topic
describing something that has happened
        <xref ref-type="bibr" rid="ref1">(Allan 2002)</xref>
        . On the other hand, the information
extraction (IE) field uses a more fine-grained definition of an event, which is often
expressed by a word or phrase in a document. In TimeML, a recent annotation
scheme, events are defined as situations that happen or occur and are expressed by verbs,
nominalizations, adjectives, predicative clauses, or prepositional phrases
        <xref ref-type="bibr" rid="ref10 ref11 ref5">(Pustejovsky,
Castaño, et al. 2003)</xref>
        . In this paper, we follow the IE view and focus on the
recognition of TimeML events.
      </p>
      <p>
        Previous studies have proposed different approaches for automatic recognition of
events, most notably adopting machine learning techniques based on lexical semantic
classes and morpho-syntactic information around events
        <xref ref-type="bibr" rid="ref2 ref3 ref7 ref8">(Bethard and Martin 2006;
Boguraev and Ando 2007; Llorens, Saquete, and Navarro-Colorado 2010; March and
Baldwin 2008; Saurí et al. 2005)</xref>
        . In recognizing events, some of the past work used
top-level WordNet classes
        <xref ref-type="bibr" rid="ref4">(Fellbaum 1998)</xref>
        to represent the meanings of events. It
turns out, however, that such WordNet classes used as lexical semantic features are
not sufficient. When WordNet hypernyms within the top four levels
        <xref ref-type="bibr" rid="ref7">(Llorens,
Saquete, and Navarro-Colorado 2010)</xref>
        or some selected classes
        <xref ref-type="bibr" rid="ref2">(Bethard and Martin
2006)</xref>
        were used, they could not represent events well. For example, the WordNet
class event is a representative level-4 class for expressing events, yet only 28.46% of the
event nouns (i.e., hyponyms of the WordNet event class) occurring in the TimeBank 1.2
corpus are annotated as TimeML events. TimeBank is a corpus of news articles
annotated according to the TimeML scheme
        <xref ref-type="bibr" rid="ref10 ref11 ref5">(Pustejovsky, Hanks, et al. 2003)</xref>
        .
      </p>
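      <p>To make the hyponymy test behind that 28.46% figure concrete, the following sketch checks whether a noun falls under the WordNet event class by inspecting its hypernym chain. The chains here are toy stand-ins for illustration only; a real system would query WordNet itself:</p>

```python
# Toy hypernym chains standing in for WordNet; in practice these would be
# obtained from WordNet's hypernym hierarchy for each sense of the noun.
TOY_HYPERNYMS = {
    "war":   ["war", "military_action", "group_action", "act", "event", "entity"],
    "table": ["table", "furniture", "furnishing", "artifact", "object", "entity"],
}

def is_event_noun(noun):
    """A noun counts as an 'event noun' if the class `event` is among its hypernyms."""
    return "event" in TOY_HYPERNYMS.get(noun, [])

print(is_event_noun("war"))    # True
print(is_event_noun("table"))  # False
```

      <p>As the corpus statistic above shows, passing such a test alone is a weak predictor of a TimeML event annotation, which is what motivates looking at the noun's context as well.</p>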
      <p>Events can be expressed by different parts of speech. In this paper, we focus on
noun event recognition because previous approaches showed low performance in
recognizing noun events, even though nouns account for about 28% of all events
according to our data analysis. For the problem of recognizing event nouns, we propose
a method using dependency-based features that hold between an event noun and its
syntactically related words. In addition, we chose to use WordNet classes at deeper
levels than the top-4 levels used in previous work. We show through experiments that
our proposed method outperforms the previous work.</p>
      <p>The rest of the paper is organized as follows. Section 2 introduces TimeML and
TimeBank corpus as a representation and annotation scheme and as a test bed,
respectively. It is followed by a discussion of related work for TimeML-based event
recognition in Section 3. Section 4 presents our event recognition method using the
deep-level WordNet classes and the dependency-based features. We then discuss our
experiments and results in Section 5. Finally, the last section presents our conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>TimeML and TimeBank Corpus</title>
      <p>
        TimeML is a robust specification language for event and temporal expressions in
natural language
        <xref ref-type="bibr" rid="ref10 ref11 ref5">(Pustejovsky, Castaño, et al. 2003)</xref>
        . It was first announced in 2002 in
an extended workshop called TERQAS (Time and Event Recognition for Question
Answering System, http://www.timeml.org/site/terqas/index.html). It addresses four basic problems:
1. Time stamping of events (identifying an event and anchoring it in time)
2. Ordering events with respect to one another (lexical versus discourse properties of ordering)
3. Reasoning with contextually underspecified temporal expressions (temporal functions such as “last week” and “two weeks before”)
4. Reasoning about the persistence of events (how long an event or the outcome of an event lasts)
      </p>
      <p>
        There are four major data components in TimeML: EVENT, TIMEX3, SIGNAL,
and LINK
        <xref ref-type="bibr" rid="ref12">(Pustejovsky et al. 2007)</xref>
        . TimeML treats an event as a term for situations
that happen or occur, or as elements describing states or circumstances in which
something obtains or holds true (EVENT). Temporal expressions in TimeML are
marked up with TIMEX3 tags referring to dates, durations, sets of times, etc. The
SIGNAL tag is used to annotate function words that indicate how temporal
objects (events and temporal expressions) are to be related to each other. The last
component, LINK, describes the temporal (TLINK), subordinate (SLINK), and aspectual
(ALINK) relationships between temporal objects.
      </p>
      <p>Fig. 2 shows an example of TimeML annotation. For the event “teaches”, its type is
kept in the class attribute, and its tense and aspect information is tagged in
MAKEINSTANCE. The normalized values of the temporal expressions “3:00” and
“November 22, 2004” are stored in the value attribute of the TIMEX3 tags. The signal words “at”
and “on” link events and temporal expressions through TLINK tags.</p>
      <p>John
&lt;EVENT eid="e1" class="OCCURRENCE"&gt; teaches &lt;/EVENT&gt;
&lt;MAKEINSTANCE eiid="ei1" eventID="e1" tense="PRESENT"
aspect="NONE" /&gt;
&lt;SIGNAL sid="s1"&gt; at &lt;/SIGNAL&gt;
&lt;TIMEX3 tid="t1" type="TIME" value="2004-11-22T15:00"
temporalFunction="TRUE" anchorTimeID="t2"&gt; 3:00
&lt;/TIMEX3&gt;
&lt;SIGNAL sid="s2"&gt; on &lt;/SIGNAL&gt;
&lt;TIMEX3 tid="t2" type="DATE value="2004-11-22"&gt;</p>
      <p>November 22, 2004 &lt;/TIMEX3&gt;.
&lt;TLINK eventInstanceID="ei1" relatedToTime="t1"
relType="IS_INCLUDED" signalID="s1"/&gt;
&lt;TLINK timeID="t1" relatedToTime="t2"
reltype="IS_INCLUDED" signalID="s2"/&gt;</p>
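      <p>Annotations in this form can be consumed with ordinary XML tooling. A minimal standard-library Python sketch that extracts the EVENT and TIMEX3 information from a fragment like the one above (wrapped in a root element so it parses as XML):</p>

```python
import xml.etree.ElementTree as ET

# The TimeML fragment from the example above, wrapped in a root element.
timeml = """<doc>John
<EVENT eid="e1" class="OCCURRENCE">teaches</EVENT>
<MAKEINSTANCE eiid="ei1" eventID="e1" tense="PRESENT" aspect="NONE"/>
<SIGNAL sid="s1">at</SIGNAL>
<TIMEX3 tid="t1" type="TIME" value="2004-11-22T15:00"
        temporalFunction="TRUE" anchorTimeID="t2">3:00</TIMEX3>
<SIGNAL sid="s2">on</SIGNAL>
<TIMEX3 tid="t2" type="DATE" value="2004-11-22">November 22, 2004</TIMEX3>.
<TLINK eventInstanceID="ei1" relatedToTime="t1"
       relType="IS_INCLUDED" signalID="s1"/>
</doc>"""

root = ET.fromstring(timeml)
# Collect the annotated events and normalized temporal expressions.
events = [(e.get("eid"), e.get("class"), e.text) for e in root.iter("EVENT")]
times = [(t.get("tid"), t.get("value")) for t in root.iter("TIMEX3")]
print(events)  # [('e1', 'OCCURRENCE', 'teaches')]
print(times)   # [('t1', '2004-11-22T15:00'), ('t2', '2004-11-22')]
```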
      <p>Among the several corpora annotated with TimeML
(http://timeml.org/site/timebank/timebank.html), TimeBank is the best known, as it
started as a proof of concept of the TimeML specifications. TimeBank 1.2 is the
most recent version, annotated with the TimeML 1.2.1 specification. It
contains 183 news articles and more than 61,000 non-punctuation tokens, of
which 7,935 are events.</p>
      <p>We analyzed the corpus to investigate the distribution of PoS (part of speech) tags,
assigned by the Stanford PoS tagger (http://nlp.stanford.edu/software/tagger.shtml),
for the tokens annotated as events. As shown in Table 1, most events are expressed
by verbs and nouns. Together, these two PoS types cover about 93% of all event tokens,
split into about 65% for verbs and 28% for nouns. The
percentages for cardinal numbers and adjectives are relatively small; they usually express
quantitative (e.g., “47%”) and qualitative (e.g., “beautiful”) states. Adverbs and
prepositions indicate events when they appear in predicative phrases (e.g., “he was
here” or “he was on board”).</p>
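      <p>The distribution in Table 1 amounts to a frequency count over the PoS tags of event-annotated tokens. A sketch with hypothetical data (the real analysis tags all 7,935 event tokens in TimeBank):</p>

```python
from collections import Counter

# Hypothetical (PoS tag, token) pairs for event-annotated tokens; the real
# analysis runs the Stanford PoS tagger over the TimeBank corpus.
event_tokens = [("VB", "said"), ("VB", "teaches"), ("VB", "delayed"),
                ("NN", "war"), ("NN", "election"), ("JJ", "beautiful")]

# Count events per PoS tag and report each tag's share.
counts = Counter(pos for pos, _ in event_tokens)
total = sum(counts.values())
for pos, n in counts.most_common():
    print(f"{pos}: {n}/{total} = {n / total:.0%}")
```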
      <p>In finding verb events automatically from the TimeBank corpus, Llorens et al.
(2010)’s work, a state-of-the-art approach, showed high effectiveness in terms of F1
(0.913). We note, however, that its performance in recognizing noun events was just 0.584
in F1. This clearly indicates that noun event recognition, which is significant by itself,
is a harder problem that needs to draw more attention and research.</p>
    </sec>
    <sec id="sec-3">
      <title>Related Work</title>
      <p>EVITA (Saurí et al. 2005) is the first event recognition tool for the TimeML
specification. It recognizes events by combining linguistic and statistical techniques,
using manually encoded rules based on linguistic information as its main features. For
nominal event recognition, it also applies WordNet classes in those rules, checking
whether the head word of a noun phrase belongs to the WordNet event classes. For
sense disambiguation of nouns, it uses a Bayesian classifier trained on the SemCor
corpus (http://www.gabormelli.com/RKB/SemCor_Corpus).</p>
      <p>
        Boguraev and Ando (2007) analyzed the TimeBank corpus and presented a
machine-learning-based approach for automatic TimeML event annotation. They cast
the task as a classification problem and used a robust risk minimization (RRM)
classifier
        <xref ref-type="bibr" rid="ref15">(Zhang, Damerau, and Johnson 2002)</xref>
        to solve it. They used lexical and
morphological attributes and syntactic chunk types in bi- and tri-gram windows as features.
      </p>
      <p>
        Bethard and Martin
        <xref ref-type="bibr" rid="ref2">(2006)</xref>
        developed a system, STEP, for
TimeML event recognition and type classification. They adopted syntactic and
semantic features, and formulated the event recognition task as classification in the
word-chunking paradigm. They used a rich set of features: textual, morphological,
syntactic dependency and some selected WordNet classes. They implemented a
Support Vector Machine (SVM) model based on those features.
      </p>
      <p>Lastly, Llorens et al. (2010) presented an evaluation of event recognition and type
classification. They added semantic roles as features and built a Conditional Random
Field (CRF) model to recognize events. They conducted experiments on the
contribution of semantic roles and CRFs and reported that the CRF model improved
performance but that the effect of semantic role features was not significant. The
approach achieved 82.4% F1 in event recognition on the TimeBank 1.2 corpus, making
it the state of the art in TimeML event recognition and type classification.</p>
    </sec>
    <sec id="sec-4">
      <title>Event Recognition</title>
      <p>The main goal of our research is to devise an effective method for recognizing
TimeML noun events. Our proposed method consists of three parts: preprocessing,
feature extraction, and classification. The preprocessing part analyzes raw text,
performing tokenization, PoS tagging, and syntactic (dependency) parsing. It is done
with the Stanford CoreNLP package (http://nlp.stanford.edu/software/corenlp.shtml),
a suite of natural language processing tools. The feature extraction part then converts
the preprocessed data into feature spaces; we explain the details of our feature
extraction methods in Subsection 4.1. Finally, the classification part determines
whether a given noun is an event using the MaxEnt classification algorithm.</p>
      <sec id="sec-4-1">
        <title>Feature Sets</title>
        <p>The feature sets for recognizing events are of three types: Basic Features, Lexical
Semantic Features, and Dependency-based Features. The Basic Features are based
on one of the TimeML annotation guidelines (prenominal nouns are not annotated as
events), and the Lexical Semantic Features are the lemmas and all WordNet
hypernyms of the target nouns to be classified. Those hypernyms include the deep WordNet
classes indicating the specific concepts of nouns. The Dependency-based Features are
adopted because syntactically related words tend to serve as important clues in
determining whether or not a noun refers to an event.</p>
        <p>Basic Features. The Basic Features include named entity (NE) tags and an indication
of whether the target noun is prenominal. A personal name or a geographical
location cannot be an event, and prenominal nouns are not considered events
according to the TimeML annotation guideline.</p>
        <p>Lexical Semantic Features. The Lexical Semantic Features (LS) are the set of the target
nouns’ lemmas and their all-depth WordNet semantic classes (i.e., hypernyms). Some
nouns have a high probability of indicating an event when they belong to a very
specific WordNet class. For example, the noun “drop” is always an event regardless
of its sentence context. While a word-sense ambiguity problem arises in
mapping a token to a WordNet synset, we ignore it and simply use the
hypernyms of all the senses.</p>
        <p>Dependency-based Features. We posit that nouns become events when they occur in
a certain surrounding context, namely, with certain syntactic dependencies. We use the
words related to the target noun through dependency relations, together with their
semantic classes. The four dependencies we consider are: direct object (OBJ), subject
(SUBJ), modifier (MOD), and preposition (PREP).</p>
        <p>- VB_OBJ type. A feature is formed with the governing verb that has the OBJ
relation with the target noun, together with its hypernyms. In “… delayed the game…”, for
instance, the verb “delay” can describe the temporal state of its object noun, “game”.
- VB_SUBJ type. This is the verb that has the SUBJ relation with the target noun,
together with its hypernyms. For example, the verb “occur” indicates that its subject
is an event, because the subject actually occurs, as in the definition of an event.
- MOD type. This refers to the dependent words, and their hypernyms, in a MOD relation.
This feature type is based on the intuition that some modifiers, such as temporal
expressions, reveal that the noun they modify has a temporal state and is therefore
likely to be an event.
- PREP type. This is the preposition governing the noun. Some prepositions, such as
“before”, may indicate that the noun following them occurs at some specific time.</p>
        <p>Sometimes, Dependency-based Features need to be combined with Lexical
Semantic Features, because a certain syntactic dependency may not be an absolute clue for
an event by itself but only when it co-occurs with a certain lexical or semantic aspect
of the target noun. As shown in Table 2, direct objects of “report” are not always
events (about 32% are not events in the TimeBank corpus). However, when the direct
object belongs to the WordNet process class, the target noun is almost always
an event. In such cases, therefore, we need to use a combined feature.</p>
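        <p>Under these definitions, feature generation can be sketched as follows. The (head, relation, dependent) triples and the class lookup are hypothetical illustrations; the actual pipeline derives them from the Stanford dependency parse and from WordNet:</p>

```python
# Sketch of dependency-based feature generation for a target noun.
# TOY_CLASSES stands in for a WordNet hypernym lookup (hypothetical data).
TOY_CLASSES = {"delay": ["delay", "holdup", "act", "event"],
               "game":  ["game", "activity", "act", "event"]}

def dependency_features(target, triples):
    feats = []
    for head, rel, dep in triples:
        if rel == "dobj" and dep == target:
            # VB_OBJ: the governing verb and its hypernym classes
            feats.append("VB_OBJ=" + head)
            feats += ["VB_OBJ_CLS=" + c for c in TOY_CLASSES.get(head, [])]
            # Combined feature: the dependency plus the target noun's own class
            feats += ["VB_OBJ=%s&CLS=%s" % (head, c)
                      for c in TOY_CLASSES.get(target, [])]
        elif rel == "nsubj" and dep == target:
            feats.append("VB_SUBJ=" + head)   # VB_SUBJ type
        elif rel == "amod" and head == target:
            feats.append("MOD=" + dep)        # MOD type
        elif rel == "prep" and dep == target:
            feats.append("PREP=" + head)      # PREP type
    return feats

# "... delayed the game ..."  ->  delay --dobj--> game
feats = dependency_features("game", [("delay", "dobj", "game")])
print(feats)
```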
        <p>While the three types of features make their own contributions to determining
whether a noun is an event, their relative weights differ. The classification
algorithm categorizes the target nouns based on the weighted features.</p>
        <p>
          We weight the features with Kullback-Leibler divergence (KL-divergence), which
is a non-symmetric measure of the difference between two probability distributions
          <xref ref-type="bibr" rid="ref6">(Kullback and Leibler 1951)</xref>
          and a popular weighting scheme in text mining. For a
feature f, its weight is calculated using the formula in (1), where E and ¬E are the
distributions of event and non-event terms. PE(f) and P¬E(f) are the probabilities of f in
E and ¬E, respectively.
        </p>
        <p>W(f) = KL(E‖¬E) = PE(f) ln (PE(f) / P¬E(f)) (1)</p>
        <p>Since we decided to use all the WordNet hypernyms as candidate features, which
makes the feature space too large to handle, we need to select the more valuable ones
from the candidate set. We use the KL-divergence weighting method for this purpose
and selected the top 104,922 features, because this cut-off empirically showed the
best performance in our preliminary experiment: we measured the performance when
applying the top-k features, and it was maximized at k = 104,922.</p>
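        <p>A minimal sketch of the weighting in (1) and the top-k cut-off, with made-up probabilities (in the paper, PE(f) and P¬E(f) are estimated from the event and non-event terms of TimeBank):</p>

```python
import math

# KL-divergence feature weighting as in Eq. (1):
#   W(f) = P_E(f) * ln(P_E(f) / P_notE(f))
# The probabilities below are made up for illustration.
def kl_weight(p_e, p_not_e):
    if p_e == 0.0 or p_not_e == 0.0:
        return 0.0  # simple guard against log(0); smoothing is an alternative
    return p_e * math.log(p_e / p_not_e)

features = {"VB_OBJ=delay": (0.020, 0.002),
            "CLS=process":  (0.050, 0.010),
            "MOD=the":      (0.300, 0.310)}

weights = {f: kl_weight(pe, pne) for f, (pe, pne) in features.items()}
top_k = sorted(weights, key=weights.get, reverse=True)[:2]  # keep the k best
print(top_k)  # ['CLS=process', 'VB_OBJ=delay']
```

        <p>Features that are more common among non-events than events receive a negative weight and fall below any reasonable cut-off, which is exactly the selection behavior the weighting is meant to produce.</p>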
        <p>
          For our classification algorithm, we considered four popular ones in machine
learning: Naïve Bayes, Decision Tree (C4.5), MaxEnt, and SVM algorithms. Among them,
the MaxEnt showed the best performance for our classification task. The packages we
used are Weka
          <xref ref-type="bibr" rid="ref14">(Witten, Frank, and Hall 2011)</xref>
          and Mallet machine learning tools
          <xref ref-type="bibr" rid="ref9">(McCallum 2002)</xref>
          .
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Experiment</title>
      <sec id="sec-5-1">
        <title>Comparison with Previous Work</title>
        <p>
          We first evaluated the proposed method by comparing it with previous work; the
results are shown in Table 3. We chose two baselines
          <xref ref-type="bibr" rid="ref2 ref7">(Bethard &amp; Martin 2006; Llorens
et al. 2010)</xref>
          that were the most recent ones evaluated on the TimeBank 1.2 corpus.
        </p>
        <p>The proposed method shows improvements of about 22% in precision and 9% in
recall over the state of the art, the work of Llorens et al. Overall, the proposed
method increased the F1 score by about 18% and 13% compared to the two baselines,
respectively. The evaluation was done by 5-fold cross-validation.</p>
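        <p>The 5-fold protocol can be sketched with a simple round-robin split (standard-library only; the actual experiments use the Weka and Mallet implementations):</p>

```python
# Sketch of 5-fold cross-validation over a labeled dataset: each instance
# serves as test data exactly once.  Instances here are just placeholders.
def k_fold(items, k=5):
    folds = [items[i::k] for i in range(k)]       # round-robin split
    for i in range(k):
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        yield train, test

data = list(range(10))
for train, test in k_fold(data, k=5):
    assert sorted(train + test) == data           # every item used, no overlap
print(sum(len(test) for _, test in k_fold(data, 5)))  # each item tested once: 10
```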
        <p>Our classifier used only 85,518 features within the top-8 WordNet classes among
the 104,922 features mentioned in Section 4.2. In Section 5.3, we describe the
cumulative level-8 features in detail.</p>
        <p>We ran additional experiments to understand the roles of the individual feature
types. To show the relative importance of the Lexical Semantic Features (LS) and the
Dependency-based Features (VB_OBJ, VB_SUBJ, MOD, and PREP types), we
measured the performance changes caused by excluding one feature type at a time.</p>
        <p>As shown in Table 4, the VB_OBJ and MOD features are judged to be the most
important, because performance decreased most significantly when they were excluded.
The effects of the other features were not as great, but they cannot be disregarded,
as they always contribute to the overall performance.</p>
        <p>To investigate the effect of deep-level WordNet classes, we observed the
performance changes incurred by increasing the cumulative WordNet depth within which
features were generated. Depth fifteen, for example, means that all the hypernyms of the
matched word are considered as features. The results are presented in Fig. 3.</p>
        <p>[Fig. 3: precision, recall, F1, and the number of features as a function of the
cumulative WordNet depth.]</p>
        <p>In Fig. 3, the y-axis on the left represents the performance of event recognition
in terms of precision, recall, and F1, and the y-axis on the right shows the number of
features, which varies with the cumulative WordNet depth represented on the x-axis.</p>
        <p>Regardless of the depth of the WordNet classes, the classifier reached a high
precision of over 0.9, but recall varied quite widely. Recall increased with class
depth and peaked at the top-8 level, where recall and F1 were 0.577 and
0.718, respectively.</p>
        <p>The number of features increased continuously up to level 13 and stayed the
same beyond that. The total number of features was 104,922, but the classifier used only
85,518 features at level 8 (where performance was best). From these results,
we expect that there is a proper ontology level for recognizing events, which turned
out to be level 8 in the WordNet hierarchy.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>In this paper, we propose a TimeML noun event recognition method using
syntactic dependencies and WordNet classes and show their effect using the TimeBank
collection. We chose to focus on noun events because they were recognized poorly in
previous research although they constitute about 28% of all events. The problem of
recognizing such events was formulated as a classification task using lexical semantic
features (lemmas and WordNet hypernyms) and dependency-based features.</p>
      <p>Our experimental results show that the proposed method is better than the previous
approaches at recognizing TimeML noun events. The performance increase in terms of
the F1 measure is from 0.584 to 0.718, which we consider very significant. Through our
analysis, we conclude that using dependency-based features and
deep-level WordNet classes is important for recognizing events. We also showed that
recall was increased significantly by using hypernym features from the deeper levels of
the WordNet hierarchy. A performance increase in recall for event detection, due mainly
to the accurate handling of nouns and to the effectiveness of the proposed
classification method, would translate into wider coverage of event-related triples in the
Semantic Web.</p>
      <p>Although the proposed method showed encouraging results compared to the
previous approaches, it still has some limitations. One issue is the level of WordNet, or
of an ontology in general, used for expanding the feature set, because the current
method requires too large a feature space. Another is word sense disambiguation,
which we ignored entirely in the current work. Although we obtained some performance
increase with deeper levels, it is not clear how much more gain sense disambiguation
would bring. We are currently working on these two issues.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgment</title>
      <p>This research was supported by Basic Science Research Program through the
National Research Foundation of Korea (NRF) funded by the Ministry of Education,
Science and Technology (2011-0027292).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>1. Allan, James, ed. 2002. Topic Detection and Tracking: Event-based Information Organization. Springer.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>2. Bethard, Steven, and James H. Martin. 2006. “Identification of Event Mentions and Their Semantic Class.” In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, 146-154. Association for Computational Linguistics.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>3. Boguraev, Branimir, and Rie Ando. 2007. “Effective Use of TimeBank for TimeML Analysis.” In Annotating, Extracting and Reasoning About Time and Events, ed. Frank Schilder, Graham Katz, and James Pustejovsky, 4795:41-58. Springer Berlin / Heidelberg. doi:10.1007/978-3-540-75989-8_4.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>4. Fellbaum, Christiane, ed. 1998. WordNet: An Electronic Lexical Database. The MIT Press.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>5. Hobbs, Jerry, and James Pustejovsky. 2003. “Annotating and Reasoning About Time and Events.” In AAAI Technical Report SS-03-05.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. Kullback, Solomon, and Richard A. Leibler. 1951. “On Information and Sufficiency.” The Annals of Mathematical Statistics 22 (1): 79-86.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Llorens, Hector, Estela Saquete, and Borja Navarro-Colorado. 2010. “TimeML Events Recognition and Classification: Learning CRF Models with Semantic Roles.” In Proceedings of the 23rd International Conference on Computational Linguistics, 725-733. Association for Computational Linguistics.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. March, Olivia, and Timothy Baldwin. 2008. “Automatic Event Reference Identification.” In Proceedings of the Australasian Language Technology Workshop, 6:79-87.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. McCallum, Andrew Kachites. 2002. “MALLET: A Machine Learning for Language Toolkit.” http://mallet.cs.umass.edu/.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. Pustejovsky, James, José Castaño, Robert Ingria, Roser Saurí, Robert Gaizauskas, Andrea Setzer, and Graham Katz. 2003. “TimeML: Robust Specification of Event and Temporal Expressions in Text.” In Proceedings of the 5th International Workshop on Computational Semantics.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. Pustejovsky, James, Patrick Hanks, Roser Saurí, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, et al. 2003. “The TIMEBANK Corpus.” In Proceedings of the Corpus Linguistics 2003 Conference, 647-656.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>12. Pustejovsky, James, Robert Knippen, Jessica Littman, and Roser Saurí. 2007. “Temporal and Event Information in Natural Language Text.” In Computing Meaning, ed. Harry Bunt, Reinhard Muskens, Lisa Matthewson, Yael Sharvit, and Thomas Ede Zimmerman, 83:301-346. Springer Netherlands. doi:10.1007/978-1-4020-5958-2_13.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>13. Saurí, Roser, Robert Knippen, Marc Verhagen, and James Pustejovsky. 2005. “Evita: A Robust Event Recognizer for QA Systems.” In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, 700-707. Association for Computational Linguistics. doi:10.3115/1220575.1220663.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>14. Witten, Ian H., Eibe Frank, and Mark A. Hall. 2011. Data Mining: Practical Machine Learning Tools and Techniques. 3rd ed. Morgan Kaufmann.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>15. Zhang, Tong, Fred Damerau, and David Johnson. 2002. “Text Chunking Based on a Generalization of Winnow.” The Journal of Machine Learning Research 2 (March): 615-637.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>