<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Query Labelling for Indic Languages using a hybrid approach</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Rupal</forename><surname>Bhargava</surname></persName>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Department of Computer Science &amp; Information Systems</orgName>
								<orgName type="institution">Birla Institute of Technology &amp; Science, Pilani</orgName>
								<address>
									<addrLine>Pilani Campus</addrLine>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Yashvardhan</forename><surname>Sharma</surname></persName>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Department of Computer Science &amp; Information Systems</orgName>
								<orgName type="institution">Birla Institute of Technology &amp; Science, Pilani</orgName>
								<address>
									<addrLine>Pilani Campus</addrLine>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Shubham</forename><surname>Sharma</surname></persName>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Department of Computer Science &amp; Information Systems</orgName>
								<orgName type="institution">Birla Institute of Technology &amp; Science, Pilani</orgName>
								<address>
									<addrLine>Pilani Campus</addrLine>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Abhinav</forename><surname>Baid</surname></persName>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Department of Computer Science &amp; Information Systems</orgName>
								<orgName type="institution">Birla Institute of Technology &amp; Science, Pilani</orgName>
								<address>
									<addrLine>Pilani Campus</addrLine>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Query Labelling for Indic Languages using a hybrid approach</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">92AC2299DC903B0A1E715DB9A842DF09</idno>
					<idno type="DOI">10.1145/2600428.2609622</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T13:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Transliteration</term>
					<term>Natural Language Processing</term>
					<term>Language Identification</term>
					<term>Machine Learning</term>
					<term>Logistic Regression</term>
					<term>Information Retrieval</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>With the boom of the internet, the volume of social media text is growing day by day. Much of this user-generated content is written very informally, and people often write words of their indigenous languages on social media. Understanding a script different from one's own is a difficult task; moreover, a large share of the queries received by search engines today is transliterated text. Providing a common platform to deal with transliterated text therefore becomes very important. This paper presents our approach to query word labelling as part of the FIRE 2015 shared task on Mixed-Script Information Retrieval. Tokens in the query are labelled using a hybrid approach that combines rule-based and machine learning techniques; each annotation is handled separately but sequentially.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">INTRODUCTION</head><p>There are a large number of indigenous scripts in the world that are widely used. By indigenous scripts, we are referring to any script that is not Roman. For technological reasons such as the lack of standard keyboards for non-Roman scripts, the popularity of the QWERTY keyboard and familiarity with the English language, much of the user-generated content on the internet is written in transliterated form. Transliteration is the process of phonetically representing the words of a language in a non-native script. For example, to represent a Hindi colloquialism such as the word for 'Okay', users will often write its transliterated form <ref type="bibr" target="#b1">[1]</ref>. Search engines receive a large number of transliterated search queries daily; the challenge in processing these queries is the spelling variation of their transliterated forms. For example, the Hindi word for 'food' can be written as 'khana', 'khaana', 'khaanna', and so on. This problem involves the following: (1) handling spelling variations due to transliteration, and (2) forward/backward transliteration. Similarly, with the rise in the use of social media, there has been a corresponding increase in the use of hashtags, emoticons and abbreviations. So, along with identification of languages, these need to be recognized as well. Also, named entities should be considered separately <ref type="bibr" target="#b2">[2]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">SUBTASK 1: QUERY WORD LABELING</head><p>Suppose that q: w1 w2 w3 … wn is a query written in the Roman script. The words w1, w2, etc. could be standard English words or words transliterated from another language L = {Bengali (bn), Gujarati (gu), Hindi (hi), Kannada (kn), Malayalam (ml), Marathi (mr), Tamil (ta), Telugu (te)}. The task is to label each word as en or as a member of L, depending on whether it is an English word or a transliterated L-language word. Further, Named Entity (NE) recognition and identification of mixed-language words (MIX) and punctuation (X) also had to be carried out.</p></div>
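The label set above can be made concrete with a small illustrative sketch; the query and its labelling below are hypothetical examples constructed for illustration, not task data:

```python
# Label set for the subtask described above: eight Indic language tags,
# English, plus the NE / MIX / X annotations.
LANGS = ["bn", "gu", "hi", "kn", "ml", "mr", "ta", "te"]
LABELS = LANGS + ["en", "NE", "MIX", "X"]

# A hypothetical transliterated query and one plausible word-level labelling.
query = "khana ready hai !"
labels = ["hi", "en", "hi", "X"]

# Every token receives exactly one label drawn from the label set.
assert len(query.split()) == len(labels)
assert all(tag in LABELS for tag in labels)
```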
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">PROPOSED TECHNIQUE</head><p>Our system reads the input file and separates it into tokens.</p><p>After identification of all the tags, an output is generated. For training purposes, we collected additional data for Gujarati and Hindi from the previous year's Microsoft FIRE event. Logistic regression was used to train a classifier for each language individually. The feature set consisted of character unigram and bigram indices, with unigrams contributing the most in our opinion. A rule-based approach, based on the probabilities obtained, was used to combine the individual language classifiers. The processes for the other annotations are explained below in their respective stages.</p><p>Token identification (X, NE, MIX, etc.) is done in a pipelined manner. The 4 stages of the pipeline are:</p><p>1. Identification of Punctuation (X): The tag X encompasses all forms of punctuation, numerals, emoticons, mentions, hashtags and acronyms. This stage is further divided into 2 parts done sequentially: identification of emoticons, hashtags, etc., and identification of abbreviations.</p><p>a. Identification of hashtags, emoticons, etc.: This is done using the CMU ARK tagger (http://www.ark.cs.cmu.edu/TweetNLP/) with a tagging model especially designed for social media text. The tagging model is a first-order maximum entropy Markov model (MEMM), a discriminative sequence model for which training and decoding are extremely efficient <ref type="bibr" target="#b4">[4]</ref>.</p></div>
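A minimal sketch of the per-language classifier described above, assuming scikit-learn; the training words, the positive/negative split and the hyper-parameters are illustrative placeholders, not the authors' actual configuration:

```python
# One binary logistic-regression classifier per language: positive examples
# are words of that language, negatives are words of other languages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def train_language_model(pos_words, neg_words):
    # Binary (0/1) character unigram + bigram features, as in the text.
    vec = CountVectorizer(analyzer="char", ngram_range=(1, 2), binary=True)
    X = vec.fit_transform(pos_words + neg_words)
    y = [1] * len(pos_words) + [0] * len(neg_words)
    clf = LogisticRegression().fit(X, y)
    return vec, clf

def language_probability(word, vec, clf):
    # Probability that `word` belongs to the classifier's language.
    return clf.predict_proba(vec.transform([word]))[0][1]

# Toy example: a tiny "Hindi transliteration" classifier (illustrative data).
vec, clf = train_language_model(["khana", "khaana", "paani", "thik"],
                                ["food", "water", "ready", "okay"])
p = language_probability("khanna", vec, clf)
assert 0.0 <= p <= 1.0
```

The rule-based combiner then compares the probabilities returned by each language's classifier and picks the highest.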
<div xmlns="http://www.tei-c.org/ns/1.0"><head>b. Identification of abbreviations:</head><p>A dictionary-based approach is used for this purpose. A list of around 1400 commonly used abbreviations in SMS language was built, and a word is marked as X if it occurs in this list.</p></div>
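The dictionary look-up in stage 1b amounts to a simple set-membership test; the entries below are a tiny stand-in for the ~1400-entry abbreviation list described above:

```python
# Illustrative subset of an SMS-abbreviation dictionary (placeholder data).
SMS_ABBREVIATIONS = {"lol", "brb", "omg", "idk", "ttyl", "asap"}

def tag_abbreviation(token):
    # Case-insensitive membership test; a hit yields the X tag,
    # otherwise the token falls through to the later pipeline stages.
    return "X" if token.lower() in SMS_ABBREVIATIONS else None

assert tag_abbreviation("LOL") == "X"
assert tag_abbreviation("khana") is None
```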
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Identification of Named Entities (NE):</head><p>Named entities were also identified using a dictionary-based approach.</p><p>The training data was used to create the dictionary of named entities because the data was insufficient to run a machine learning algorithm. The number of named entities was 2414. This number was too low, and the multi-language nature of the dataset made it hard to characterize words as NE with certainty.</p><p>For example, in English, named entities occur in a certain manner at certain positions according to the sentence structure. In multilingual sentences, however, the sentence structure varies a lot.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Identification of Language:</head><p>For language detection, the classifier was built using logistic regression with feature vectors containing character unigrams and bigrams <ref type="bibr" target="#b3">[3]</ref>.</p><p>4. Identification of mixed words (MIX): Finally, a rule-based approach was adopted for identifying mixed words in the utterances. If the 2 highest language probabilities in the list generated in the previous stage are close to each other, the word is classified as MIX. The threshold for detecting MIX words, determined empirically by setting it at different values and manually evaluating the output, was 0.05, applied to words of length greater than 8.</p><p>If there is a match in stage 1 or 2 of the pipeline, the token is immediately tagged and no further stages are applied to it. Otherwise, the token passes through stages 3 and 4 above so that the final tag can be determined.</p></div>
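The MIX rule of stage 4 can be sketched directly from the two empirical constants given above (threshold 0.05, word length greater than 8); the probability inputs below are illustrative:

```python
# Rule-based MIX detection: if the top two language probabilities are
# within the threshold and the word is long enough, label it MIX.
MIX_THRESHOLD = 0.05
MIN_MIX_LENGTH = 8  # word length must be strictly greater than 8

def classify_mix(word, lang_probs):
    """lang_probs: dict mapping language tag -> probability from stage 3."""
    ranked = sorted(lang_probs.items(), key=lambda kv: kv[1], reverse=True)
    (top_lang, p1), (_, p2) = ranked[0], ranked[1]
    if len(word) > MIN_MIX_LENGTH and (p1 - p2) < MIX_THRESHOLD:
        return "MIX"
    return top_lang

# Long word with two near-tied languages -> MIX; short or clear-cut -> top tag.
assert classify_mix("khanawala", {"hi": 0.51, "mr": 0.49}) == "MIX"
assert classify_mix("khana", {"hi": 0.90, "mr": 0.10}) == "hi"
```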
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">EXPERIMENTS AND RESULTS</head><p>We used the data given to us, which included labeled utterances from social media and blogs, to build our training data set. We submitted three runs, in which we used character 1- and 2-grams as features.</p><p>We manually removed a few words from the named entity list in run 2. In run 3, mixed word detection was enabled; it was disabled in the other runs to prevent accuracy from going down due to false positives. Our training data consisted of 41882 words across all languages, including named entities. The training data set was built as a dense model, i.e. data is represented using 0 for those features that are not present in the word and 1 for those that are present, with the feature vector containing 712 entries per word, corresponding to each possible character 1-gram and 2-gram. A separate model was built for each language, containing an equal number of words in the language and words not in the language. We used the scikit-learn toolkit (http://scikit-learn.org/stable/) for machine learning <ref type="bibr" target="#b5">[5]</ref>. For language identification, we tried linear regression, naïve Bayes and logistic regression classifiers.</p><p>For cross validation, we used an 80-20 split of the training data to test the performance of our system. The results obtained using the evaluation script for our individual classifiers are shown in Table <ref type="table" target="#tab_0">1</ref>. They show clearly that the individual classifiers were quite good. We decided to use a linear kernel for logistic regression as it gave the highest accuracy. We tried out different parameters and chose the configuration most optimal for our training data. As shown in Table <ref type="table" target="#tab_2">3</ref>, our overall weighted F-Measure was 56.7%. Also, our standard deviation was close to a 10% error margin.</p><p>In addition, there was a direct correlation between the precision and the training data sizes used. The number of words for the different languages in the training data was 3509 (bn), 17392 (en), 744 (gu), 4237 (hi), 1520 (kn), 1126 (ml), 1868 (mr), 3116 (ta) and 5960 (te).</p><p>As shown in Table <ref type="table" target="#tab_1">2</ref>, languages like English, for which the training data size was larger, gave around 72% F-Measure and 87% recall with 61% precision, while Gujarati, which had very little training data, gave 17% precision. We did better on the weighted F-Measure statistic because the languages with less training data were also the ones least represented in the test data. As such, weighted evaluation of the language predictor gave us around 56% F-Measure.</p><p>Named entity recognition was done using a lookup-based method that classifies words in the test set as named entities if they were found in the training set. This was done because the training set for named entities was too small to use a machine-learned named entity recognizer. The results obtained reaffirmed that our approach was correct.</p><p>It was observed that the language predictor developed with our approach predicted inaccurately on the testing data due to the small training data. The precisions of our individual classifiers and the official results for English, Bengali, and Tamil back our claim.</p></div>
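The dense 0/1 feature representation described above can be sketched as follows; the alphabet here is assumed to be lowercase a-z for illustration, so the vector length differs from the 712-entry space the authors used (whose exact composition is not specified):

```python
# Dense binary feature vector: one 0/1 entry per possible character
# 1-gram and 2-gram over an assumed lowercase a-z alphabet.
import string

ALPHABET = string.ascii_lowercase
FEATURES = list(ALPHABET) + [a + b for a in ALPHABET for b in ALPHABET]
INDEX = {f: i for i, f in enumerate(FEATURES)}

def dense_vector(word):
    word = word.lower()
    vec = [0] * len(FEATURES)
    # Collect every character unigram and bigram present in the word.
    grams = set(word) | {word[i:i + 2] for i in range(len(word) - 1)}
    for g in grams:
        if g in INDEX:
            vec[INDEX[g]] = 1
    return vec

v = dense_vector("khana")
# "khana" contains 4 distinct unigrams (k, h, a, n) and 4 distinct
# bigrams (kh, ha, an, na), so 8 entries are set.
assert len(v) == 26 + 26 * 26
assert sum(v) == 8
```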
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">CONCLUSION AND FUTURE WORK</head><p>In this paper, we discussed an n-gram approach to identifying the language of a word. The context cues of a word could be used to identify the language instead of relying only on character unigrams and bigrams. Future work could implement a sequence-based classifier that classifies a word based on the previous and the next word. Instead of using only unigrams and bigrams, the system could be improved to use {1, 2, 3, 4, 5}-grams with different machine learning algorithms such as MaxEnt, naïve Bayes, logistic regression, SVM, etc. Our named entity recognizer was prone to errors due to insufficient data.</p><p>Similarly, the accuracy of our system could be improved by training it on more data. However, X tokens were identified with reasonable accuracy.</p><p>Tagging of MIX words could also be improved by using better thresholds.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 : Language wise Precision for different classifiers on test data from the 80-20 split</head><label>1</label><figDesc></figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 : Official language wise F-Measure, Precision, Recall</head><label>2</label><figDesc></figDesc><table><row><cell>Language</cell><cell>F-Measure</cell><cell>Precision</cell><cell>Recall</cell></row><row><cell>X</cell><cell>0.8237</cell><cell>0.8963</cell><cell>0.7619</cell></row><row><cell>bn</cell><cell>0.4803</cell><cell>0.4327</cell><cell>0.5397</cell></row><row><cell>en</cell><cell>0.7214</cell><cell>0.6171</cell><cell>0.8683</cell></row><row><cell>gu</cell><cell>0.0849</cell><cell>0.1784</cell><cell>0.0557</cell></row><row><cell>hi</cell><cell>0.3853</cell><cell>0.3473</cell><cell>0.4326</cell></row><row><cell>kn</cell><cell>0.4038</cell><cell>0.4281</cell><cell>0.3821</cell></row><row><cell>ml</cell><cell>0.297</cell><cell>0.3896</cell><cell>0.24</cell></row><row><cell>mr</cell><cell>0.3141</cell><cell>0.3899</cell><cell>0.263</cell></row><row><cell>ta</cell><cell>0.5365</cell><cell>0.6501</cell><cell>0.4567</cell></row><row><cell>te</cell><cell>0.3444</cell><cell>0.3473</cell><cell>0.3415</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3 : Weighted F-Measure and token accuracy for the three runs.</head><label>3</label><figDesc></figDesc><table><row><cell></cell><cell>Run 1</cell><cell>Run 2</cell><cell>Run 3</cell></row><row><cell>Total tokens</cell><cell>11999</cell><cell>11999</cell><cell>11999</cell></row><row><cell>Correct tokens</cell><cell>6576</cell><cell>6575</cell><cell>6574</cell></row><row><cell>Weighted F-Measure</cell><cell>0.567742</cell><cell>0.56769</cell><cell>0.567615851</cell></row><row><cell>Token Accuracy (%)</cell><cell>54.8046</cell><cell>54.7962</cell><cell>54.7879</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>


<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Labeling the Languages of Words in Mixed-Language Documents using Weakly Supervised Methods</title>
		<author>
			<persName><forename type="first">Ben</forename><surname>King</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Steven</forename><forename type="middle">P</forename><surname>Abney</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>HLT-NAACL</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Query expansion for mixed-script information retrieval</title>
		<author>
			<persName><forename type="first">Parth</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kalika</forename><surname>Bali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rafael</forename><forename type="middle">E</forename><surname>Banchs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Monojit</forename><surname>Choudhury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Rosso</surname></persName>
		</author>
		<idno type="DOI">10.1145/2600428.2609622</idno>
		<ptr target="http://dx.doi.org/10.1145/2600428.2609622" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 37th international ACM SIGIR conference on Research &amp; development in information retrieval (SIGIR &apos;14)</title>
				<meeting>the 37th international ACM SIGIR conference on Research &amp; development in information retrieval (SIGIR &apos;14)<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="677" to="686" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
	<title level="a" type="main">&quot;Ye word kis lang ka hai bhai?&quot; Testing the Limits of Word level Language Identification</title>
	<author>
		<persName><forename type="first">Spandana</forename><surname>Gella</surname></persName>
	</author>
	<author>
		<persName><forename type="first">Kalika</forename><surname>Bali</surname></persName>
	</author>
	<author>
		<persName><forename type="first">Monojit</forename><surname>Choudhury</surname></persName>
	</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eleventh International Conference on Natural Language Processing (ICON 2014)</title>
				<meeting>the Eleventh International Conference on Natural Language Processing (ICON 2014)<address><addrLine>Goa, India</addrLine></address></meeting>
		<imprint/>
	</monogr>
	<note>To appear</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
	<title level="a" type="main">Improved part-of-speech tagging for online conversational text with word clusters</title>
		<author>
			<persName><forename type="first">Olutobi</forename><surname>Owoputi</surname></persName>
		</author>
	<author>
		<persName><forename type="first">Brendan</forename><surname>O&apos;Connor</surname></persName>
	</author>
	<author>
		<persName><forename type="first">Chris</forename><surname>Dyer</surname></persName>
	</author>
	<author>
		<persName><forename type="first">Kevin</forename><surname>Gimpel</surname></persName>
	</author>
		<author>
			<persName><forename type="first">Nathan</forename><surname>Schneider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Noah</forename><forename type="middle">A</forename><surname>Smith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of NAACL-HLT</title>
				<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Scikit-learn: Machine Learning in Python</title>
		<author>
			<persName><surname>Pedregosa</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">JMLR</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="2825" to="2830" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
