<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">EventXtract-IL: Event Extraction from Social Media Text in Indian Languages @ FIRE 2017 - An Overview</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Pattabhi</forename><surname>Rk</surname></persName>
							<email>pattabhi@au-kbc.org</email>
						</author>
						<author>
							<persName><forename type="first">Sobha</forename><surname>Lalitha</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="institution">AU-KBC Research Centre</orgName>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="institution">MIT Campus of Anna University</orgName>
								<address>
									<addrLine>+91 44 22232711</addrLine>
									<settlement>Chrompet, Chennai</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">EventXtract-IL: Event Extraction from Social Media Text in Indian Languages @ FIRE 2017 - An Overview</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">198F2387A1A2AA4D79063D1CC1978EC5</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T03:15+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>CCS Concepts</term>
					<term>Computing methodologies ~ Artificial intelligence</term>
					<term>Computing methodologies ~ Natural language processing</term>
					<term>Information systems ~ Information extraction</term>
					<term>Event Extraction</term>
					<term>Social Media Text</term>
					<term>Twitter</term>
					<term>Indian Languages</term>
					<term>Tamil</term>
					<term>Hindi</term>
					<term>Malayalam</term>
					<term>Event Annotated Corpora for Indian Language data</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Communication through social media platforms has become so fast that people across the world learn of an event happening in any corner of the world within seconds. The penetration of smartphones, tablets, etc. has significantly changed the way people communicate. Facebook and Twitter are the two most popular social media platforms, where people post about events, their daily activities and plans, and also share their thoughts, responses and reactions to public causes and issues. In recent times we have seen how Facebook posts and Twitter tweets have helped mobilize people in Indian states such as Tamil Nadu (TN) and Jammu &amp; Kashmir (J&amp;K). The mass public protests over the "Jallikattu" event in TN and the stone-pelting protests in J&amp;K are prominent examples of how social media has impacted the common man. Real-time information about events is very valuable to the administration for disaster management, crowd control and public alerting. The same information, used in the development of recommender systems, adds value to the growth of business enterprises. There is therefore a great need to develop automatic event extraction systems. This paper presents an overview of "Event Extraction in Indian Languages", a track at FIRE 2017. The task of this track is to extract events from social media text, namely Twitter. Some of the main issues in handling such social media text are i) spelling errors, ii) abbreviated new vocabulary such as "gr8" for great, iii) use of symbols such as emoticons/emojis, iv) use of meta tags and hash tags, and v) code mixing, though code mixing was not considered in this track. Although event extraction from Indian language texts is gaining attention in the Indian research community, there is no benchmark data available for testing systems.
Hence we organized the Event Extraction in Social Media Text track for Indian languages (EventXtract-IL) in the Forum for Information Retrieval Evaluation (FIRE). The paper describes the corpus created for three languages, viz., Hindi, Malayalam and Tamil, and presents an overview of the approaches used by the participants.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>Over the past decade, Indian language content on media such as websites, blogs, email and chat has increased significantly, and with the advent of smartphones more people are using social media such as Twitter and Facebook to comment on people, products, services, organizations, governments, etc. Content growth is driven by people from non-metros and small cities, who are generally more comfortable in their own mother tongue than in English. Indian language content is expected to grow by more than 70% every year. Hence there is a great need to process these data automatically, which requires natural language processing systems that extract events, entities and the associations between them. Thus an automatic event extraction system is required.</p><p>Shared tasks exist for social media text where users use only one language, but there is no such shared task for event identification and extraction. Thus there is a need to develop systems that focus on social media texts for event extraction.</p><p>The paper is organized as follows: Section 2 describes the challenges in event extraction for Indian languages; Section 3 describes the corpus annotation, the tag set and corpus statistics; Section 4 gives an overview of the approaches used by the participants; and Section 5 concludes the paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">GENERAL CHALLENGES IN INDIAN LANGUAGE EVENT EXTRACTION</head><p>The challenges in developing event extraction systems for Indian languages from social media text arise from several factors. One of the main factors is that no annotated data is available for any of the Indian languages. Apart from the lack of annotated data, the other factors that differentiate Indian languages from European languages are the following: a) Ambiguity - ambiguity between common and proper nouns, e.g. the common word "Roja", meaning rose flower, is also a person's name. b) Spell variations - different people spell the same entity differently; for example, the Tamil person name "Roja" is spelt "rosa" or "roja". c) Scarce resources - most Indian languages are less-resourced languages, and there are no automated tools that can handle social media text for preprocessing tasks such as part-of-speech tagging and chunking.</p><p>Apart from these challenges, the development of automatic event recognition systems is difficult for the following reasons:</p><p>i) Tweets contain a huge range of distinct event types. Almost all of these types are relatively infrequent, so even a large sample of manually annotated tweets will contain very few training examples per type.</p><p>ii) In comparison with English, Indian languages have more dialectal variations, mainly influenced by different regions and communities.</p><p>iii) Indian language tweets are multilingual in nature and predominantly contain English words.</p><p>The following example illustrates the use of English words and spoken, dialectal forms in tweets: Example 1 is a Tamil tweet written in a particular dialect that also uses English words. Similarly, in Hindi we find many spelling variations; words such as "mumbai", "gaandhi", "sambandh" and "thanda" each have at least three different spellings.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">CORPUS DESCRIPTION</head><p>The corpus was collected using the Twitter API in two different time periods: the training partition was collected during June 2017 and the test partition during August 2017. As explained in the sections above, we observe concept drift in Twitter data; to evaluate how the systems handle concept drift, we collected data in two different time periods. In this initiative the corpus is available for three Indian languages: Hindi, Malayalam and Tamil.</p><p>The tables and figures show different aspects of the corpus statistics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ANNOTATION TAGSET</head><p>The corpus for each language was annotated manually by trained experts. The event extraction task requires identifying the event trigger keyword and the full event predicate and representing them with a tag. In this work the data is tagged with a single tag, "Event", covering one phrase consisting of the event trigger and the event predicate, for example "Governor for Tamil Nadu appointed". Most work on event extraction in English has used the Automatic Content Extraction (ACE) event tag set. In the present track we focused only on extracting one event phrase, consisting of the event trigger and the whole event predicate, which conveys where and when the event happened and who the participants were. As there is not much work in this area for Indian languages, and to keep the task definition simple, in this edition we did not take up separate identification of event types or of the where and who of events.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>DATA FORMAT</head><p>The participants were provided the annotation markup in a separate file called the annotation file; the raw tweets were to be downloaded separately using the Twitter API. The data covers event types such as cyclones, floods, accidents, disease outbreaks and political events. The majority were disasters and political events such as inaugurations/opening ceremonies by political leaders; the data also had events on movie or audio release functions.</p></div>
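The annotation file described above (see also Table 1) pairs each event string with the 0-based character index at which it starts in the tweet. As a minimal sketch, assuming a tab-separated layout with a tweet id, start index and event string per line (the exact field layout is an assumption, not the official track specification):

```python
# Hypothetical reader for an annotation file of the kind described in
# the track: tweet id, 0-based start index, and Event string per line.
# The tab-separated layout and field names are assumptions.
import csv
import io

def read_annotations(text):
    rows = []
    for tweet_id, index, event in csv.reader(io.StringIO(text), delimiter="\t"):
        rows.append({"tweet_id": tweet_id, "start": int(index), "event": event})
    return rows

def check_offset(tweet_text, ann):
    # The index is the starting character of the Event string, counting from 0.
    return tweet_text[ann["start"]:ann["start"] + len(ann["event"])] == ann["event"]

sample = "901\t13\tnaye rAjyapAl ki niyukti\n"
anns = read_annotations(sample)
tweet = "tamilnadu me naye rAjyapAl ki niyukti huA"
```

A `check_offset`-style sanity check is useful after downloading the raw tweets, to verify that the stated index really points at the event string in the tweet text.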
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">SUBMISSION OVERVIEWS</head><p>The evaluation metrics used for this task are precision, recall and F-measure, the metrics most widely used for such tasks. A total of 16 teams registered for the track, of which 4 made final submissions; with multiple runs per team, a total of 11 test runs were submitted for evaluation. Only one team participated in all three languages, and one team each participated for Hindi, Tamil and Malayalam.</p><p>We developed a baseline system without any preprocessing or lexical resources, using a CRF classifier that marks whether or not a word is part of an event phrase. The baseline was built to enable a better comparative study. Its performance is a precision of 23.87% and a recall of 29.67%.</p><p>All the teams outperformed the baseline system. In the following paragraphs we briefly describe the approach used by each team. The results of the teams are given in Table <ref type="table">3</ref>.</p><p>a) The Alapan team used neural networks, combining a CNN with an LSTM. They first remove URLs, emoticons, etc. from the tweets; no NLP preprocessing such as POS tagging or chunking is applied. This team participated in all three languages and submitted 2 runs per language.</p><p>b) The Sharmila team used SVMs. The data was preprocessed only for tokenization, with no cleaning, and the task was modeled as a simple binary classification task. The team participated for Tamil and submitted three runs. c) The Nageshbhattu team used CRFs. They preprocessed the data with part-of-speech (POS) tagging and used POS tags and words in a window of 5 as features for CRF learning. One interesting aspect is that a POS tagger built for general text was applied to the tweet data; it will be interesting to know how well a general newswire POS engine performs on tweets. This team participated for Hindi and submitted one run.</p></div>
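The word-window features described for the CRF runs can be sketched as follows. This is an illustrative reconstruction, not the team's code: feature names and the padding token are assumptions, and only the word window is shown (a POS window has the same shape).

```python
# Sketch of window-of-5 features for a token-level CRF: the current
# word plus the words up to two positions to either side. Feature
# names and the <PAD> token are illustrative assumptions.
def window_features(tokens, i, size=5):
    half = size // 2
    feats = {"word": tokens[i]}
    for off in range(-half, half + 1):
        if off == 0:
            continue
        j = i + off
        feats[f"word[{off:+d}]"] = tokens[j] if 0 <= j < len(tokens) else "<PAD>"
    return feats

toks = "tamilnadu me naye rAjyapAl ki niyukti huA".split()
f = window_features(toks, 3)  # features around "rAjyapAl"
```

One feature dictionary of this shape per token is the usual input to CRF toolkits for sequence labelling of the is-event/is-not-event kind used by the baseline and the CRF submissions.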
<div xmlns="http://www.tei-c.org/ns/1.0"><head>d) Manju team</head><p>The Manju team used an open-source tool called BeautifulSoup to identify the events. The tool is meant for website scraping, but here it was used for event classification; the choice of tool is not appropriate for this task. In fact the method can be described as a "blind method": almost all input tweets are marked as events, and by default about a fifth of them come out correct. This team participated in Malayalam and submitted one run.</p><p>The different methodologies used by the teams are summarized in Table <ref type="table">2</ref>.</p></div>
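A rough illustration of why such a blind, mark-everything method still scores something: recall is perfect by construction, while precision collapses to the fraction of inputs that truly carry events. The numbers below are made up for illustration only.

```python
# Tweet-level view of a "blind" baseline that marks every tweet as
# containing an event. Recall is 1.0 by construction; precision is
# just the true event rate. Counts here are illustrative, not the
# track's actual corpus statistics.
def blind_scores(n_tweets, n_event_tweets):
    correct = n_event_tweets           # every true event tweet is trivially found
    precision = correct / n_tweets     # everything was marked as an event
    recall = correct / n_event_tweets
    return precision, recall

p, r = blind_scores(1000, 200)  # e.g. if a fifth of the tweets carry events
```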
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Evaluation</head><p>The evaluation metrics used are precision, recall and F-measure. All systems were evaluated automatically against the gold data. The results obtained by each participant are shown in Table <ref type="table">3</ref>.</p><p>One main condition in event phrase identification concerns the event span. The span or extent of the event phrase is to be minimal yet complete: it should include the event trigger and the predicate. Consider the example below. Hi: bahut dinom se kahi jA rahi rAjyapAl ki niyukti, tamilnadu me naye rAjyapAl ki niyukti huA.</p><p>Here the event phrase is "tamilnadu me naye rAjyapAl ki niyukti"; it cannot be just "rAjyapAl ki niyukti". The event trigger is "niyukti", and the event predicate is "tamilnadu me naye rAjyapAl", from which we get the where and what of the event. A participating system needs to identify this exact event phrase; any system output that tags more than this extent is considered wrong.</p><p>Thus we define: Precision, P = (No. of events correctly identified by the system) / (Total no. of events identified by the system); Recall, R = (No. of events correctly identified by the system) / (Total no. of events in the gold data); F-measure = (2 * P * R) / (P + R).</p></div>
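The exact-span scoring defined above can be sketched as follows, assuming each event span is represented as a tweet id plus character start/end offsets (the span representation is an assumption; the formulas match the definitions in the text):

```python
# Exact-match scoring of Event phrase spans: a system span counts as
# correct only if it matches a gold span exactly; any longer or
# shorter extent is wrong. Spans are (tweet_id, start, end) tuples.
def score(gold, system):
    correct = len(gold & system)
    p = correct / len(system) if system else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {("t1", 10, 52), ("t2", 0, 33)}
system = {("t1", 10, 52), ("t2", 0, 40), ("t3", 5, 20)}  # one exact match
p, r, f = score(gold, system)
```

Note how the over-long span on "t2" scores zero credit, which is exactly the "anything more than this extent is wrong" condition stated above.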
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">CONCLUSION</head><p>The main objective of creating benchmark data for a few popular Indian languages has been achieved, and this data has been made freely available to the research community for research purposes. The data is user generated and not specific to any genre. Efforts are ongoing to standardize the data and make it a solid data set for future researchers. We observe that the results obtained are similar across the languages. We hope to see more publications in this area in the coming days from the research groups that could not submit their results, and we expect more groups to start using this data for their research. The EventXtract-IL track is one of the first efforts towards the creation of event-annotated user-generated data for Indian languages. The data being generic, it can be used to develop generic systems upon which domain-specific systems can be built after customization. In the next edition of this track we plan to add more data and also include identification and extraction of event types, event cause-effects and event participants.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>6.</head><p>Table <ref type="table">2</ref>. Participant Team Overview -Summary </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Example 1 (</head><label>1</label><figDesc>Tamil): Ta: Stamp veliyittu ivaga ativaangi ….. En: stamp released these_people get_beaten …. Ta: othavaangi …. kadasiya &lt;loc&gt;kovai&lt;/loc&gt; En: get_slapped … at_end kovai Ta: pooyi pallakaatti kuththu vaangiyaachchu. En: gone show_tooth punch got ("They released stamp, got slapping and beating … at the end reached Kovai and got punched on the face")</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 .</head><label>1</label><figDesc>Index column is the starting character position of the Event string calculated for each tweet and the count starts from '0'. The participants were also instructed to provide the test file annotations in the same format as given for the training data. The dataset statistics is as follows: Corpus Statistics</figDesc><table><row><cell>The annotation file</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENTS</head><p>We thank the FIRE 2017 organizers for giving us the opportunity to conduct the evaluation exercise. We also thank the Language Editors in CLRG, AU-KBC Research Centre.</p></div>
			</div>

		</back>
	</text>
</TEI>
