<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">An Event Extraction System via Neural Networks</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alapan</forename><surname>Kuila</surname></persName>
							<email>alapan.cse@iitkgp.ac.in</email>
							<affiliation key="aff0">
								<orgName type="institution">Indian Institute of Technology Kharagpur</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sudeshna</forename><surname>Sarkar</surname></persName>
							<email>sudeshna@cse.iitkgp.ernet.in</email>
							<affiliation key="aff1">
								<orgName type="institution">Indian Institute of Technology Kharagpur</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">An Event Extraction System via Neural Networks</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">87D592E3A48865756BD289D06D6A9784</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T03:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper we describe the IIT KGP team's participation in the Event Extraction task at FIRE 2017. We developed an event extraction system that extracts event-phrases from tweets written in Indian language scripts as well as Roman script. We designed our system for Hindi and then applied the same system to Malayalam and Tamil. We submitted two systems: one uses a pipelined architecture and the other a non-pipelined architecture. In the pipelined architecture we first identify the tweets that contain an event and then extract the event-phrase from those tweets. In the non-pipelined system all tweets are passed directly to the event extraction system. Though conceptually simpler, the non-pipelined approach gives better results than the pipelined approach, achieving F1-scores of 50.01, 48.29 and 51.80 on the Hindi, Malayalam and Tamil datasets respectively.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Event extraction from unstructured text is one of the most important and challenging tasks in information extraction and natural language processing. It deals with the automatic extraction of events such as accidents, crimes, natural disasters and political events from newswires, discussion forums and social media text. Most existing event extraction systems <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b13">14]</ref> deal with English text, where the main objective is to detect event trigger words and classify them into predefined event classes <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b13">14]</ref>. Though there are several successful efforts for English, such as the ACE and TAC 1 evaluation tracks, there is no standard event extraction tool for Indian languages. The Event Extraction task at FIRE 2017 aims to identify and extract events from newswires and social media text, specifically tweets. The tweets are written in three Indian language scripts: Hindi, Malayalam and Tamil, along with romanized script. Unlike typical event extraction systems <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b13">14]</ref>, where the objective is to detect trigger words in sentences and classify them into predefined event types, the FIRE 2017 shared task deals with the extraction of the event-phrase (the phrase that depicts an event) from a given tweet. In this paper, we present the system we developed for this event extraction task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">RELATED WORK</head><p>Many approaches have been taken to extract events from text. Judea and Strube (2015) formulated event extraction as frame-semantic parsing <ref type="bibr" target="#b3">[4]</ref>. <ref type="bibr">McClosky et al. (2011) [12]</ref> used dependency parsing to extract events. Earlier, researchers used feature-based approaches to extract events <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b17">18]</ref>, but such features are domain dependent and require substantial linguistic knowledge <ref type="bibr" target="#b14">[15]</ref>. 1 https://tac.nist.gov/2017/KBP/ To overcome the difficulties of complicated feature engineering and domain dependency, researchers have used neural network approaches for event classification <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b13">14]</ref>. However, all these works deal with English, and their principal objective is to detect the trigger word in the text that indicates an event. Some of these papers also identify the arguments related to the event trigger and their corresponding roles in the event <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b17">18]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">TASK DEFINITION</head><p>The Event Extraction task at FIRE 2017 requires participants to detect the event-phrase in a given tweet. In the training set, tweets are written in three Indian languages: Hindi, Malayalam and Tamil, along with romanized script. The objective is to detect the phrase within the tweet that depicts an event such as a natural disaster (floods, earthquakes, etc.), a man-made disaster (accidents, crime, etc.), a political event (inaugurations by political leaders, political rallies, etc.) or a cultural/social event (seminars, conferences, light music events, etc.).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">DATASETS</head><p>The dataset contains tweets written in both Indian language scripts and Roman script. The three Indian languages are Hindi, Malayalam and Tamil. The training dataset contains two files for each language. One file contains all the tweets obtained using the Twitter API; the other is an annotation file containing the event phrases extracted from the tweets in the first file. Each line in the annotation file contains: the tweet-id, the user-id, the event phrase of the tweet, the index where this phrase starts in the tweet string, and the string length of the event phrase. The test file contains only the tweets with their corresponding tweet-id and user-id. The details of the training and test datasets are shown in Table <ref type="table" target="#tab_0">1</ref>. </p></div>
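The annotation-file layout described above can be parsed with a few lines of code. A minimal sketch, assuming tab-separated fields in the order listed (the delimiter, field names and sample values are illustrative, not taken from the actual FIRE 2017 files):

```python
def parse_annotation_line(line):
    """Parse one annotation line: tweet-id, user-id, event phrase,
    start index of the phrase in the tweet string, and phrase length.
    Assumes tab-separated fields (illustrative; the real delimiter may differ)."""
    tweet_id, user_id, phrase, start, length = line.rstrip("\n").split("\t")
    return {
        "tweet_id": tweet_id,
        "user_id": user_id,
        "phrase": phrase,
        "start": int(start),
        "length": int(length),
    }

# Hypothetical example line in the assumed format.
record = parse_annotation_line("123\t456\tbhukamp ke jhatke\t10\t17")
```

A simple sanity check on such data is that the stored length matches the length of the extracted phrase.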
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">SYSTEM DESCRIPTION</head><p>In this section we describe our event extraction system. We experimented with two types of event extraction systems: (1) a non-pipelined approach and (2) a pipelined approach. We used neural networks as the main technique in both cases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Preprocessing</head><p>The training file contains tweets written mainly in Indian language scripts, with some Romanized script. Some of the tweets end with URLs. To avoid data sparseness we replaced all URLs with a unique token. The event annotation file contains some event phrases that are taken from the same tweets, indicate the same event and contain more or less the same words. We omitted these redundant event-phrases.</p></div>
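The URL replacement step can be sketched with a regular expression. A minimal illustration; the placeholder token name &lt;URL&gt; and the exact pattern are our assumptions, since the paper specifies only that URLs are replaced with a unique token:

```python
import re

# Match http(s) URLs anywhere in the tweet; tweets often end with a
# shortened link, so this also catches trailing t.co-style URLs.
URL_PATTERN = re.compile(r"https?://\S+")

def replace_urls(tweet, token="<URL>"):
    """Replace every URL in the tweet with a single unique token,
    reducing vocabulary sparseness."""
    return URL_PATTERN.sub(token, tweet).strip()

cleaned = replace_urls("Flood relief camp opened https://t.co/abc123")
```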
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Run1: Non-pipelined approach</head><p>In the non-pipelined approach we formulated event extraction as a sequence labelling problem. Every token in the input tweet is tagged with '0' or '1', i.e. 'outside event-phrase' or 'inside event-phrase' respectively. For this task we used a combination of a convolutional neural network <ref type="bibr" target="#b6">[7]</ref> and a bidirectional LSTM <ref type="bibr" target="#b15">[16]</ref>. To prepare the input to the convolution layer we fixed the sequence length to the maximum tweet length and padded shorter sentences with a special token where necessary. We used an embedding layer in the neural network to transform each token into a real-valued vector <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b16">17]</ref>, and this sequence of real-valued vectors is fed to the neural network model. The main architecture employed here is a convolutional neural network (CNN) <ref type="bibr" target="#b6">[7]</ref> followed by a bidirectional LSTM <ref type="bibr" target="#b15">[16]</ref>. The input to the convolution layer is a matrix of size n * m, where n is the sequence length and m is the dimensionality of the word vectors. The input matrix is passed through a convolution layer with a fixed filter length and filter size; then, without any pooling layer, the output of the first convolution layer is passed to a second convolution layer with another fixed filter length and size, keeping the sequence length the same as the input sequence length. This internal representation is of size n * m_c, where m_c is the dimension of the internal vector representation. This internal vector representation is fed to a bidirectional LSTM with one hidden layer. 
The output of the bidirectional LSTM layer is followed by a softmax layer that computes the probability distribution over the possible tags '0' and '1' for each token in the sequence.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3">Run2: Pipelined approach</head><p>Here we used a convolutional neural network (CNN) based architecture for tweet classification. As the tweets are of different lengths, padding is applied to make them a fixed size. The padded sequences are fed to an embedding layer that converts the tokens into fixed-size real-valued vectors. The sequences of fixed-size vectors are then fed to a convolution layer followed by a max-pooling layer, and this internal representation is again fed to a second convolution layer followed by a pooling layer. The model uses multiple filter sizes to obtain multiple features. The output is fed to a fully connected softmax layer that gives the probability distribution over two classes: event-tweet or non-event-tweet. The performance of the tweet-classification module is reported in Table <ref type="table" target="#tab_2">2</ref>. The tweets classified as event-tweets are then fed to the event extraction module described in the non-pipelined section. The architecture of the event extraction module in the pipelined approach is the same as in the non-pipelined approach; the only difference is that in the pipelined approach we train on only those tweets that contain events, discarding tweets without events from the training data.</p></div>
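The fixed-length input preparation described above (truncating or padding every token sequence to the maximum tweet length with a special token, before the embedding layer) can be sketched as follows. A minimal illustration; the &lt;PAD&gt; token name and the example tokens are our assumptions:

```python
def pad_sequence(tokens, max_len, pad_token="<PAD>"):
    """Pad a token sequence to max_len with a special token (or
    truncate if longer), so that every tweet yields an n x m matrix
    after the embedding layer."""
    if len(tokens) >= max_len:
        return tokens[:max_len]
    return tokens + [pad_token] * (max_len - len(tokens))

padded = pad_sequence(["flood", "in", "chennai"], 6)
```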
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The event extraction module then gives the event span (i.e. the event phrase) within the tweet. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.4">Postprocessing</head><p>The event phrase that depicts an event inside a tweet consists of a consecutive word sequence. So after sequence tagging, if there are '0's inside a sequence of '1's, the first '1' is taken as the starting point of the event-phrase and the last '1' in the sequence indicates its end. All tokens inside this boundary are considered part of the event-phrase. We use this heuristic to maintain the constraint that every event-phrase consists of consecutive tokens.</p></div>
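The post-processing heuristic above can be sketched directly: any '0's lying between the first and last '1' are flipped, so the predicted event phrase becomes a single consecutive span. A minimal sketch of this step:

```python
def close_span(tags):
    """Given per-token tags (0 = outside, 1 = inside event phrase),
    mark everything from the first '1' to the last '1' as inside,
    enforcing a consecutive event phrase."""
    if 1 not in tags:
        return tags  # no event predicted in this tweet
    start = tags.index(1)                       # first '1'
    end = len(tags) - 1 - tags[::-1].index(1)   # last '1'
    return [1 if start <= i <= end else 0 for i in range(len(tags))]

smoothed = close_span([0, 1, 0, 1, 1, 0])
```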
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.5">Parameters and training</head><p>The event extraction model used in the pipelined and non-pipelined approaches uses the same architecture and hyperparameters. We used 100-dimensional word embeddings in the word embedding layer. The first convolution layer uses a filter size of 3 with m_f = 30 filters; the second convolution layer uses a filter size of 4 with m_h = 20 filters. The bidirectional LSTM layer uses one hidden layer of size 60. For event classification we used the CNN-based classification approach with word embeddings of size 100. These vectors are randomly initialized and fed to the embedding layer. For the convolution operation we employed filter sizes of {3, 4, 5}, with 20 filters for each size. Finally, we trained the neural network models using the Adam optimizer with shuffled minibatches, a dropout rate of 0.5, and backpropagation for gradient calculation and parameter updates.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">RESULT AND ERROR ANALYSIS</head><p>Table <ref type="table" target="#tab_3">3</ref> shows the performance of event extraction for all three languages using both the pipelined and the non-pipelined approach. Examining the results for each language, we found that the non-pipelined system gives a better F-score than the pipelined approach. On the Hindi dataset the pipelined system achieves an F-score of 40.35, whereas the non-pipelined approach achieves 50.01. For Malayalam the F-scores of the pipelined and non-pipelined approaches are 47.17 and 48.29 respectively, which are comparable. On Tamil the non-pipelined system, with an F-score of 51.80, again beats the pipelined system (F-score: 44.01). Error propagation may be responsible for the lower performance of the pipelined system: the performance of the tweet-classification module directly influences the event extraction system in the pipelined approach.</p><p>It is also evident from Table 3 that precision is quite low for both the pipelined and non-pipelined systems. We will investigate how to improve the precision of our model. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">CONCLUSION AND FUTURE WORK</head><p>We adopted two strategies for event extraction. In the non-pipelined approach we classified each word with the tag '0' or '1', indicating outside or inside the event-phrase respectively. But many tweets do not indicate any event, so in the pipelined approach we first detect the tweets that contain an event and then identify the span of the event inside the tweet. The accuracy of the pipelined approach depends on the accuracy of the tweet-classification module, so we will try to improve the performance of that module. In our experiments the number of training tweets was very low; with more training data the event extraction accuracy may increase. In future we will try to improve the performance of the event extraction system by using more training data and other advanced strategies <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b9">10]</ref>.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Block diagram of pipelined approach</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Tweet Classification Module</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Dataset description: number of tweets</figDesc><table><row><cell>Language</cell><cell cols="2">Training data No of events</cell><cell>Test</cell></row><row><cell></cell><cell></cell><cell>in annotation</cell><cell>data</cell></row><row><cell></cell><cell></cell><cell>file</cell><cell></cell></row><row><cell>Hindi</cell><cell>1024</cell><cell>402</cell><cell>4451</cell></row><row><cell>Malayalam</cell><cell>2218</cell><cell>674</cell><cell>5173</cell></row><row><cell>Tamil</cell><cell>3843</cell><cell>1109</cell><cell>5304</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 :</head><label>2</label><figDesc>Tweet classification accuracy</figDesc><table><row><cell>Language</cell><cell>Precision(%)</cell><cell>Recall(%)</cell></row><row><cell>Hindi</cell><cell>82.92</cell><cell>64.15</cell></row><row><cell>Malayalam</cell><cell>86.08</cell><cell>62.26</cell></row><row><cell>Tamil</cell><cell>83.33</cell><cell>63.69</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 :</head><label>3</label><figDesc>Result on the final test set[P: Precision, R: Recall]</figDesc><table><row><cell>Language</cell><cell></cell><cell>Run1</cell><cell></cell><cell></cell><cell>Run2</cell></row><row><cell></cell><cell>P</cell><cell>R</cell><cell cols="2">F-score P</cell><cell>R</cell><cell>F-score</cell></row><row><cell></cell><cell>(%)</cell><cell>(%)</cell><cell>(%)</cell><cell>(%)</cell><cell>(%)</cell><cell>(%)</cell></row><row><cell>Hindi</cell><cell cols="3">36.58 79.02 50.01</cell><cell cols="3">31.42 56.37 40.35</cell></row><row><cell cols="4">Malayalam 32.98 90.20 48.29</cell><cell cols="3">39.98 57.50 47.17</cell></row><row><cell>Tamil</cell><cell cols="3">43.16 64.77 51.80</cell><cell cols="3">39.73 49.33 44.01</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Automatically Labeled Data Generation for Large Scale Event Extraction</title>
		<author>
			<persName><forename type="first">Yubo</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shulin</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiang</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kang</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jun</forename><surname>Zhao</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P17-1038</idno>
		<ptr target="https://doi.org/10.18653/v1/P17-1038" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017</title>
				<meeting>the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017<address><addrLine>Vancouver, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017-07-30">2017. July 30 -August 4</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="409" to="419" />
		</imprint>
	</monogr>
	<note>Long Papers</note>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Event Extraction via Dynamic Multi-Pooling Convolutional Neural Networks</title>
		<author>
			<persName><forename type="first">Yubo</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Liheng</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kang</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daojian</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jun</forename><surname>Zhao</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Using cross-entity inference to improve event extraction</title>
		<author>
			<persName><forename type="first">Yu</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jianfeng</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bin</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jianmin</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Guodong</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Qiaoming</forename><surname>Zhu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1</title>
				<meeting>the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="1127" to="1136" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Event Extraction as Frame-Semantic Parsing</title>
		<author>
			<persName><forename type="first">Alex</forename><surname>Judea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Strube</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SEM@ NAACL-HLT</title>
				<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="159" to="164" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">A convolutional neural network for modelling sentences</title>
		<author>
			<persName><forename type="first">Nal</forename><surname>Kalchbrenner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Edward</forename><surname>Grefenstette</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Phil</forename><surname>Blunsom</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1404.2188</idno>
		<imprint>
			<date type="published" when="2014">2014. 2014</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Convolutional Neural Networks for Sentence Classification</title>
		<author>
			<persName><forename type="first">Yoon</forename><surname>Kim</surname></persName>
		</author>
		<ptr target="http://aclweb.org/anthology/D/D14/D14-1181.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014</title>
				<meeting>the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014<address><addrLine>Doha, Qatar</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014-10-25">2014. October 25-29, 2014</date>
			<biblScope unit="page" from="1746" to="1751" />
		</imprint>
	</monogr>
	<note>, A meeting of SIGDAT, a Special Interest Group of the ACL</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">ImageNet Classification with Deep Convolutional Neural Networks</title>
		<author>
			<persName><forename type="first">Alex</forename><surname>Krizhevsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ilya</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Geoffrey</forename><forename type="middle">E</forename><surname>Hinton</surname></persName>
		</author>
		<ptr target="http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 25</title>
				<editor>
			<persName><forename type="first">F</forename><surname>Pereira</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><forename type="middle">J C</forename><surname>Burges</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Bottou</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><forename type="middle">Q</forename><surname>Weinberger</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1097" to="1105" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Joint Event Extraction via Structured Prediction with Global Features</title>
		<author>
			<persName><forename type="first">Qi</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ji</forename><surname>Heng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Liang</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACL</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="73" to="82" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Using document level cross-event inference to improve event extraction</title>
		<author>
			<persName><forename type="first">Shasha</forename><surname>Liao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ralph</forename><surname>Grishman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics</title>
				<meeting>the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="789" to="797" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Leveraging FrameNet to Improve Automatic Event Detection</title>
		<author>
			<persName><forename type="first">Shulin</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yubo</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shizhu</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kang</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jun</forename><surname>Zhao</surname></persName>
		</author>
		<ptr target="http://aclweb.org/anthology/P/P16/P16-1201.pdf" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016</title>
				<meeting>the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016<address><addrLine>Berlin, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016-08-07">2016. August 7-12, 2016</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
	<note>Long Papers</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Exploiting Argument Information to Improve Event Detection via Supervised Attention Mechanisms</title>
		<author>
			<persName><forename type="first">Shulin</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yubo</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kang</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jun</forename><surname>Zhao</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P17-1164</idno>
		<ptr target="https://doi.org/10.18653/v1/P17-1164" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics</title>
		<title level="s">Long Papers</title>
		<meeting>the 55th Annual Meeting of the Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1789" to="1798" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Event Extraction As Dependency Parsing</title>
		<author>
			<persName><forename type="first">David</forename><surname>Mcclosky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mihai</forename><surname>Surdeanu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christopher</forename><forename type="middle">D</forename><surname>Manning</surname></persName>
		</author>
		<ptr target="http://dl.acm.org/citation.cfm?id=2002472.2002667" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies -Volume 1 (HLT &apos;11)</title>
				<meeting>the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies -Volume 1 (HLT &apos;11)<address><addrLine>Stroudsburg, PA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="1626" to="1635" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Linguistic regularities in continuous space word representations</title>
		<author>
			<persName><forename type="first">Tomas</forename><surname>Mikolov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wen-Tau</forename><surname>Yih</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Geoffrey</forename><surname>Zweig</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Joint Event Extraction via Recurrent Neural Networks</title>
		<author>
			<persName><forename type="first">Thien</forename><forename type="middle">Huu</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kyunghyun</forename><surname>Cho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ralph</forename><surname>Grishman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">HLT-NAACL</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="300" to="309" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Event Detection and Domain Adaptation with Convolutional Neural Networks</title>
		<author>
			<persName><forename type="first">Thien</forename><forename type="middle">Huu</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ralph</forename><surname>Grishman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Bidirectional Recurrent Neural Networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Schuster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">K</forename><surname>Paliwal</surname></persName>
		</author>
		<idno type="DOI">10.1109/78.650093</idno>
		<ptr target="https://doi.org/10.1109/78.650093" />
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Signal Processing</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="2673" to="2681" />
			<date type="published" when="1997-11">1997. nov 1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Word representations: a simple and general method for semi-supervised learning</title>
		<author>
			<persName><forename type="first">Joseph</forename><surname>Turian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lev</forename><surname>Ratinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yoshua</forename><surname>Bengio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 48th annual meeting of the association for computational linguistics. Association for Computational Linguistics</title>
				<meeting>the 48th annual meeting of the association for computational linguistics. Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="384" to="394" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Joint Extraction of Events and Entities within a Document Context</title>
		<author>
			<persName><forename type="first">Bishan</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tom</forename><forename type="middle">M</forename><surname>Mitchell</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1609.03632</idno>
		<ptr target="http://arxiv.org/abs/1609.03632" />
		<imprint>
			<date type="published" when="2016">2016. 2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
