<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">UO UPV2 at HAHA 2019: BiGRU Neural Network Informed with Linguistic Features for Humor Recognition</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Reynier</forename><surname>Ortega-Bueno</surname></persName>
							<email>reynier.ortega@cerpamid.co.cu</email>
							<affiliation key="aff0">
								<orgName type="department">Center for Pattern Recognition and Data Mining</orgName>
								<orgName type="institution">Universidad de Oriente</orgName>
								<address>
									<settlement>Santiago de Cuba</settlement>
									<country key="CU">Cuba</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Paolo</forename><surname>Rosso</surname></persName>
							<email>prosso@dsic.upv.es</email>
							<affiliation key="aff1">
								<orgName type="department">PRHLT Research Center</orgName>
								<orgName type="institution">Universitat Politècnica de València</orgName>
								<address>
									<settlement>Valencia</settlement>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">José</forename><forename type="middle">E</forename><surname>Medina Pagola</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">University of Informatics Sciences</orgName>
								<address>
									<settlement>Havana</settlement>
									<country key="CU">Cuba</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">UO UPV2 at HAHA 2019: BiGRU Neural Network Informed with Linguistic Features for Humor Recognition</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">8879AAC70518AD635472D1F499C57418</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T16:57+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Spanish Humor Classification</term>
					<term>BiGRU Neural Network</term>
					<term>Social Media</term>
					<term>Linguistic Features</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Verbal humor is an illustrative example of how humans use creative language to produce funny content. We, as human beings, resort to humor and comicality to project more complex meanings which usually represent a real challenge, not only for computers but for humans as well. For this reason, understanding and recognizing humorous content automatically has been and continues to be an important issue in Natural Language Processing (NLP), and even more so in Cognitive Computing. To address this challenge, in this paper we describe our UO UPV2 system, developed for participating in the second edition of the HAHA (Humor Analysis based on Human Annotation) task proposed at the IberLEF 2019 Forum. Our starting point was the UO UPV system with which we participated in HAHA 2018, with some modifications to its architecture. This year we explored another way to inform our Attention-based Recurrent Neural Network model with linguistic knowledge. Experimental results show that our system achieves positive results, ranking 7th out of 18 teams.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Natural language systems have to deal with many problems related to text comprehension, and these problems become very hard when creativity and figurative devices are used in verbal and written communication. Humans can easily understand the underlying meaning of such texts but, for a computer to disentangle the meaning of creative expressions such as irony and humor, much additional knowledge and complex methods of reasoning are required. Our system applies an attention layer on top of a BiGRU to generate a context vector for each word embedding, which is then fed to another BiGRU network. Finally, the learned representation is fed to a Feed-Forward Network (FFN) to classify whether the tweet is humorous or not. Motivated by the results shown in <ref type="bibr" target="#b16">[17]</ref>, we explore incorporating linguistic information into the model through the initial hidden state of the first BiGRU layer.</p><p>The paper is organized as follows. Section 2 presents a brief description of the HAHA task. Section 3 introduces our system for humor detection. Experimental results are subsequently discussed in Section 4. Finally, in Section 5 we present our conclusions and promising directions for future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">HAHA Task and Dataset</head><p>HAHA 2019 is the second edition of the first shared task addressing the problem of recognizing humor in Spanish tweets. As in the first edition, two subtasks were proposed. The first one, "Humor Detection", aims at predicting whether a tweet is a joke or not (i.e., whether humor was intended by the author), and the second one, "Funniness Score Prediction", aims at predicting a funniness score on a 5-star scale, assuming the tweet is a joke.</p><p>Participants were provided with a human-annotated corpus of 30,000 Spanish tweets <ref type="bibr" target="#b5">[6]</ref>, divided into 24,000 tweets for training and 6,000 for testing. The training subset contains 9,253 tweets with funny content and 14,747 tweets considered non-humorous. As can be observed, the class distribution is slightly unbalanced, which adds difficulty to learning the models automatically.</p><p>Evaluation metrics were computed and reported by the organizers. They use the F1 measure on the humor class for the "Humor Detection" subtask; precision, recall and accuracy were also reported.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Our UO UPV2 System</head><p>The motivation behind our approach is twofold. Firstly, we investigate the capability of Recurrent Neural Networks, specifically the Gated Recurrent Unit (GRU) <ref type="bibr" target="#b7">[8]</ref>, to capture long-term dependencies: they have been shown to learn dependencies in considerably long sequences. GRU networks simplify the complexity of LSTM networks <ref type="bibr" target="#b10">[11]</ref>, being computationally more efficient. Moreover, attention mechanisms have endowed these networks with a powerful strategy for increasing their effectiveness, achieving better results <ref type="bibr" target="#b26">[27,</ref><ref type="bibr" target="#b28">29,</ref><ref type="bibr" target="#b23">24,</ref><ref type="bibr" target="#b11">12]</ref>. Recently, the initial hidden state of the recurrent neural network has been successfully explored as a way to inform the network with contextual information <ref type="bibr" target="#b24">[25]</ref>. Secondly, humor recognition based on feature engineering and supervised learning has been well studied in previous research, and such features have proved to be good indicators and markers of humor in text. For these reasons, we propose a method that enriches the Attention-based GRU model with linguistic knowledge, which is passed to the network through the initial hidden state. In Section 3.1 we describe the tweet preprocessing phase. Next, in Section 3.2 we present the linguistic features used for encoding humorous content. Finally, in Section 3.3 we introduce the neural network model and the way in which the linguistic features are incorporated. Figure <ref type="figure" target="#fig_0">1</ref> shows the overall architecture of our system. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Preprocessing</head><p>In the preprocessing step, the tweets are cleaned. Firstly, emoticons, URLs, hashtags, mentions, and Twitter-reserved words such as RT (for retweet) and FAV (for favorite) are recognized and replaced by a corresponding wildcard which encodes the meaning of these special tokens. Afterwards, the tweets are morphologically analyzed with FreeLing <ref type="bibr" target="#b17">[18]</ref>, and each resulting token is assigned its lemma. Then, the tweets are represented as vectors with a word embedding model. This embedding was generated using the FastText algorithm <ref type="bibr" target="#b1">[2]</ref> on the Spanish Billion Words Corpus <ref type="bibr" target="#b2">[3]</ref> and an in-house background corpus of 9 million Spanish tweets.</p></div>
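As a minimal illustration of the wildcard-substitution step above (not the authors' exact implementation; the wildcard names and patterns are hypothetical):

```python
import re

# Hypothetical patterns for the special Twitter tokens described above;
# each match is replaced by a wildcard that encodes its meaning.
WILDCARDS = [
    (re.compile(r"https?://\S+"), "<URL>"),
    (re.compile(r"@\w+"), "<MENTION>"),
    (re.compile(r"#\w+"), "<HASHTAG>"),
    (re.compile(r"\bRT\b"), "<RETWEET>"),
    (re.compile(r"\bFAV\b"), "<FAVORITE>"),
]

def clean_tweet(text: str) -> str:
    """Replace URLs, mentions, hashtags and Twitter-reserved words."""
    for pattern, wildcard in WILDCARDS:
        text = pattern.sub(wildcard, text)
    return text

print(clean_tweet("RT @user jaja #humor http://t.co/x"))
# <RETWEET> <MENTION> jaja <HASHTAG> <URL>
```

The cleaned text would then be passed to the morphological analyzer for lemmatization.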
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Linguistic Features</head><p>In our work, we explored several linguistic features useful for humor recognition in texts <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b19">20,</ref><ref type="bibr" target="#b0">1,</ref><ref type="bibr" target="#b4">5]</ref>, which can be grouped into three main categories: Stylistic, Structural and Content, and Affective. In particular, we considered stylistic features such as length, dialog markers, quotations, punctuation marks, emphasized words, URLs, emoticons, hashtags, etc. Features capturing lexical and semantic ambiguity, as well as sexual, obscene, animal and human-related terms, were considered Structural and Content features. Finally, due to the relation of humor with expressions of sentiment and emotion, we used features capturing affects, attitudes, sentiments and emotions. For more details about the features see <ref type="bibr" target="#b16">[17]</ref>.</p><p>Note that our proposal did not consider the positional features used in <ref type="bibr" target="#b16">[17]</ref>. Moreover, taking into account the close relation between irony and humor, and motivated by the results presented in <ref type="bibr" target="#b13">[14]</ref>, we included psycho-linguistic features extracted from LIWC <ref type="bibr" target="#b18">[19]</ref>. This resource contains about 4,500 entries distributed over 65 categories; for this work we decided to use all categories as independent features.</p><p>Using the previous features, we represent each message T_i by a vector V_Ti with dimensionality 165. In order to reduce and improve this representation, we applied a feature selection method, specifically the Wilcoxon rank-sum test. Using this test, all features were ranked according to their p-value.</p></div>
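The ranking step above can be sketched as follows. This is an illustrative stdlib-only version of the rank-sum test (normal approximation, no tie correction); the feature names and toy values are hypothetical, not the actual 165-feature set:

```python
import math

def rank_sum_pvalue(group_a, group_b):
    """Two-sided p-value for the Wilcoxon rank-sum test via the normal
    approximation; average ranks for ties, no tie correction."""
    pooled = sorted(list(group_a) + list(group_b))
    def avg_rank(v):
        lo = pooled.index(v) + 1                # first 1-based position of v
        return lo + (pooled.count(v) - 1) / 2   # average rank over ties
    w = sum(avg_rank(v) for v in group_a)       # rank sum of group_a
    n1, n2 = len(group_a), len(group_b)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Toy feature values over humorous vs. non-humorous tweets (hypothetical).
features = {
    "length":    ([1, 2, 3, 4, 5], [10, 11, 12, 13, 14]),  # well separated
    "emoticons": ([1, 9, 3, 12, 5], [2, 8, 4, 11, 6]),     # overlapping
}
# Rank features by p-value; the best-ranked k would form subsets
# analogous to Fea 64 / Fea 128 below.
ranked = sorted(features, key=lambda f: rank_sum_pvalue(*features[f]))
```

A well-separated feature receives a small p-value and rises to the top of the ranking, mirroring how the best-ranked subsets are selected.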
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Recurrent Network Architecture</head><p>We propose a model consisting of a BiGRU neural network at the word level. At each time step t, the BiGRU receives as input a word vector w_t. Afterwards, an attention layer is applied over each hidden state h_t. The attention weights are learned using the concatenation of the current hidden state h_t of the BiGRU and the past hidden state s_{t-1} of the second BiGRU layer. Finally, the humor label of the tweet is predicted by a feed-forward network (FFN) whose output layer has two neurons. The overall architecture is described in the following sections.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">First BiGRU Layer</head><p>In NLP problems, a standard GRU receives a word embedding w_t sequentially (in left-to-right order) at each time step and produces a hidden state h_t. Each hidden state h_t is calculated as follows:</p><formula xml:id="formula_0">z_t = σ(W^(z) w_t + U^(z) h_{t-1} + b^(z)) (update gate)
r_t = σ(W^(r) w_t + U^(r) h_{t-1} + b^(r)) (reset gate)
ĥ_t = tanh(W^(ĥ) w_t + U^(ĥ) (r_t ⊕ h_{t-1}) + b^(ĥ)) (memory cell)</formula><formula xml:id="formula_1">h_t = z_t ⊕ h_{t-1} + (1 − z_t) ⊕ ĥ_t</formula><p>where all W^(*), U^(*) and b^(*) are parameters learned during training, σ is the sigmoid function, and ⊕ stands for element-wise multiplication.</p><p>A bidirectional GRU performs the same operations as a standard GRU, but processes the incoming text in left-to-right and right-to-left order in parallel. Thus, at each time step it outputs two hidden states, →h_t and ←h_t. The proposed method uses a BiGRU network in which each new hidden state is the concatenation of the two: <formula xml:id="formula_2">ĥ_t = [→h_t, ←h_t]</formula>.</p><p>The idea behind this BiGRU layer is to capture long-range and backward dependencies simultaneously. This layer is also where the linguistic information enters the model: we initialize both initial hidden states as <formula xml:id="formula_3">→h_0 = g(T_i), ←h_0 = g(T_i)</formula>, where g(·) receives a tweet and returns a vector which encodes contextual and linguistic knowledge, g(T_i) = V_Ti.</p></div>
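As a minimal NumPy sketch of one GRU step following the equations above, with the initial hidden state set to a (hypothetical) linguistic feature vector; the sizes, weights and input embeddings are toy values, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 3   # toy embedding and hidden-state sizes (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One (W, U, b) triple per gate: update (z), reset (r), memory cell (h).
params = {g: (rng.standard_normal((d_h, d_in)),
              rng.standard_normal((d_h, d_h)),
              rng.standard_normal(d_h)) for g in "zrh"}

def gru_step(w_t, h_prev):
    Wz, Uz, bz = params["z"]
    Wr, Ur, br = params["r"]
    Wh, Uh, bh = params["h"]
    z = sigmoid(Wz @ w_t + Uz @ h_prev + bz)               # update gate
    r = sigmoid(Wr @ w_t + Ur @ h_prev + br)               # reset gate
    h_cand = np.tanh(Wh @ w_t + Uh @ (r * h_prev) + bh)    # memory cell
    return z * h_prev + (1 - z) * h_cand                   # new hidden state

# h_0 = g(T_i): a hypothetical linguistic feature vector for the tweet,
# assumed here to already match the hidden size.
v_ti = np.array([0.2, -0.1, 0.4])
h = v_ti
for w_t in rng.standard_normal((5, d_in)):   # five toy word embeddings
    h = gru_step(w_t, h)
```

A bidirectional layer would run this recurrence in both directions and concatenate the two final state sequences.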
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5">Attention Layer</head><p>With an attention mechanism, we allow the BiGRU to decide which segments of the sentence it should "attend" to. Importantly, we let the model learn what to attend to on the basis of the input sentence and what it has produced so far. Let H ∈ R^{2×N_h×T} be the matrix of hidden states [ĥ_1, ĥ_2, ..., ĥ_T] produced by the first BiGRU layer, where N_h is the size of the hidden state and T is the length of the given sequence. The goal is then to derive a context vector c_t that captures relevant information and feed it as input to the next BiGRU layer. Each c_t is calculated as follows:</p><formula xml:id="formula_4">c_t = Σ_{t'=1}^{T} α_{t,t'} ĥ_{t'}
α_{t,i} = exp(β(s_{t-1}, ĥ_i)) / Σ_{j=1}^{T} exp(β(s_{t-1}, ĥ_j))
β(s_i, ĥ_j) = V_a^T · tanh(W_a × [s_i, ĥ_j])</formula><p>where W_a and V_a are the trainable attention parameters, s_{t-1} is the past hidden state of the second BiGRU layer and ĥ_t is the current hidden state. The idea of the concatenation [s_i, ĥ_j] is to take into account not only the input sentence but also the past hidden state when producing the attention weights.</p></div>
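A NumPy sketch of this attention computation, with random stand-ins for the trained parameters W_a and V_a and toy dimensions (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_h, n_s = 5, 6, 4    # toy sequence length and hidden-state sizes

H = rng.standard_normal((T, n_h))    # states ĥ_1..ĥ_T from the first BiGRU
W_a = rng.standard_normal((n_h, n_s + n_h))   # trainable attention weights
V_a = rng.standard_normal(n_h)

def context_vector(s_prev, H):
    # Alignment scores β(s_{t-1}, ĥ_j) = V_a^T tanh(W_a [s_{t-1}; ĥ_j])
    scores = np.array([V_a @ np.tanh(W_a @ np.concatenate([s_prev, h_j]))
                       for h_j in H])
    alpha = np.exp(scores - scores.max())   # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ H, alpha                 # c_t = Σ_j α_{t,j} ĥ_j

c_t, alpha = context_vector(rng.standard_normal(n_s), H)
```

The weights α form a distribution over the T time steps, so c_t is a convex combination of the hidden states.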
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.6">Second BiGRU Layer</head><p>The goal of this layer is to obtain a deep, dense representation of the message in order to determine whether the tweet is humorous or not. At each time step, this network receives the context vector c_t, which is propagated until the final hidden state s_T. This vector is a high-level representation of the tweet. Afterwards, it is passed to a feed-forward network (FFN) with three hidden layers and a softmax layer at the end:</p><formula xml:id="formula_5">ŷ = softmax(W_0 × dense_1 + b_0)
dense_1 = relu(W_1 × dense_2 + b_1)
dense_2 = relu(W_2 × dense_3 + b_2)
dense_3 = relu(W_3 × s_T + b_3)</formula><p>where W_j and b_j (j = 0, ..., 3) denote the weight matrices and bias vectors of the last layers. Finally, cross entropy is used as the loss function, defined as:</p><formula xml:id="formula_6">L = − Σ_i y_i · log(ŷ_i)</formula><p>where y_i is the ground-truth class of the tweet (humor vs. not humor) and ŷ_i is the value predicted by the model.</p></div>
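The classification head and loss above can be sketched in NumPy as follows; layer sizes and random weights are illustrative stand-ins, not the trained parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())   # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(2)
dims = [8, 16, 16, 16, 2]   # s_T -> dense_3 -> dense_2 -> dense_1 -> output
Ws = [0.1 * rng.standard_normal((o, i)) for i, o in zip(dims, dims[1:])]
bs = [np.zeros(o) for o in dims[1:]]

def classify(s_T):
    h = s_T
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = relu(W @ h + b)                 # dense_3, dense_2, dense_1
    return softmax(Ws[-1] @ h + bs[-1])     # ŷ: humor vs. non-humor

def cross_entropy(y_true, y_hat):
    return -float(np.sum(y_true * np.log(y_hat)))   # L = -Σ_i y_i log(ŷ_i)

y_hat = classify(rng.standard_normal(8))   # toy final hidden state s_T
```

The two output probabilities sum to one, and the cross-entropy loss penalizes low probability mass on the true class.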
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Experiments and Results</head><p>For tuning some parameters of the proposed model, we used stratified k-fold cross validation with 5 partitions on the training dataset. At this stage, no feature selection was performed, so all linguistic features were considered. During the training phase, we fixed some hyper-parameters, concretely: batch size = 256, epochs = 10, GRU cell units = 256, optimizer = "Adam" and dropout in the GRU cells = 0.3. After that, we evaluated different subsets of linguistic features; in particular, five settings were explored: No Fea (linguistic information is not considered), Fea 64 (the 64 best-ranked features according to p-value), Fea 128 (the 128 best-ranked features), Fea 141 (all features with p-value ≥ 0.05) and All Fea (all linguistic features). As can be observed in Table <ref type="table" target="#tab_0">1</ref>, a slight improvement is obtained when linguistic features are passed to the model. In particular, the Fea 128 subset achieved the best F1 score on the humor class (F1 h = 0.785). Conversely, when linguistic information was removed, a drop of 3.5 points in F1 score was observed on the humor class.</p><p>Regarding the official evaluation, participants were allowed to submit more than one model, up to a maximum of 10 runs. Taking into account the results shown in Table <ref type="table" target="#tab_0">1</ref>, we submitted three runs, which differ in the number of linguistic features used to inform the UO UPV2 model. In Run1 we used the Fea 64 subset, in Run2 the Fea 128 subset, and in Run3 the Fea 141 subset. We achieved F1 scores on the humor class of 0.765, 0.773, and 0.765 for Run1, Run2 and Run3 respectively. 
These values are consistent with the results obtained in the training phase. Regarding the official ranking, Table <ref type="table" target="#tab_1">2</ref> shows that our best submission (UO UPV2) ranked 7th out of 18 teams.</p></div>
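The stratified 5-fold protocol used for tuning can be sketched with the standard library; the round-robin assignment below is a minimal illustration, not the authors' exact splitting code:

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=5, seed=0):
    """Split indices into k folds that preserve the class proportions."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)                 # shuffle within each class
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)      # deal round-robin into folds
    return folds

# The HAHA 2019 training-set proportions: 9,253 humorous vs. 14,747 not.
labels = [1] * 9253 + [0] * 14747
folds = stratified_kfold(labels, k=5)
```

Each fold then carries roughly the same 39/61 humor/non-humor ratio as the full training set.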
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>In this paper we presented UO UPV2, our modification of the UO UPV system, for the humor recognition task (HAHA) at IberLEF 2019. We participated only in the "Humor Detection" subtask and ranked 7th out of 18 teams. Our proposal combines linguistic features with an Attention-based BiGRU Neural Network. The model consists of a bidirectional GRU with an attention mechanism that estimates the importance of each word; the resulting context vectors are fed to another BiGRU model to determine whether the tweet is humorous or not. Regarding feature selection, the best result was achieved when the 128 best-ranked features (according to p-value) were considered. The results also showed that adding linguistic information through the initial hidden state improved effectiveness in terms of F1-measure.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Overall architecture of the UO UPV2 system</figDesc><graphic coords="4,186.64,191.05,242.07,263.21" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Results of the UO UPV2 system using the feature selection strategy on the training dataset.</figDesc><table><row><cell>Features</cell><cell>Pr h</cell><cell>Rc h</cell><cell>F1 h</cell><cell>Pr noh</cell><cell>Rc noh</cell><cell>F1 noh</cell><cell>F1 AVG</cell></row><row><cell>No Fea</cell><cell>0.818</cell><cell>0.695</cell><cell>0.750</cell><cell>0.826</cell><cell>0.902</cell><cell>0.862</cell><cell>0.806</cell></row><row><cell>All Fea</cell><cell>0.795</cell><cell>0.751</cell><cell>0.767</cell><cell>0.851</cell><cell>0.871</cell><cell>0.859</cell><cell>0.813</cell></row><row><cell>Fea 64</cell><cell>0.829</cell><cell>0.717</cell><cell>0.763</cell><cell>0.838</cell><cell>0.901</cell><cell>0.867</cell><cell>0.815</cell></row><row><cell>Fea 128</cell><cell>0.756</cell><cell>0.817</cell><cell>0.785</cell><cell>0.880</cell><cell>0.834</cell><cell>0.856</cell><cell>0.820</cell></row><row><cell>Fea 141</cell><cell>0.779</cell><cell>0.769</cell><cell>0.771</cell><cell>0.858</cell><cell>0.859</cell><cell>0.857</cell><cell>0.814</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 .</head><label>2</label><figDesc>Official results for the Humor Detection subtask</figDesc><table><row><cell>Rank</cell><cell>Team</cell><cell>F1</cell><cell>Pr</cell><cell>Rc</cell><cell>Acc</cell></row><row><cell>1</cell><cell>adilism</cell><cell>0.821</cell><cell>0.791</cell><cell>0.852</cell><cell>0.855</cell></row><row><cell>2</cell><cell>kevinb</cell><cell>0.816</cell><cell>0.802</cell><cell>0.831</cell><cell>0.854</cell></row><row><cell>3</cell><cell>bfarzin</cell><cell>0.810</cell><cell>0.782</cell><cell>0.839</cell><cell>0.846</cell></row><row><cell>4</cell><cell>jamestjw</cell><cell>0.798</cell><cell>0.793</cell><cell>0.804</cell><cell>0.842</cell></row><row><cell>5</cell><cell>job80</cell><cell>0.788</cell><cell>0.758</cell><cell>0.819</cell><cell>0.828</cell></row><row><cell>6</cell><cell>jimblair</cell><cell>0.784</cell><cell>0.745</cell><cell>0.827</cell><cell>0.822</cell></row><row><cell>7</cell><cell>UO UPV2</cell><cell>0.773</cell><cell>0.780</cell><cell>0.765</cell><cell>0.824</cell></row><row><cell>8</cell><cell cols="2">vaduvabogdan 0.772</cell><cell>0.729</cell><cell>0.820</cell><cell>0.811</cell></row><row><cell>. . .</cell><cell>. . .</cell><cell>. . .</cell><cell>. . .</cell><cell>. . .</cell><cell>. . .</cell></row><row><cell>18</cell><cell>premjithb</cell><cell>0.495</cell><cell>0.478</cell><cell>0.514</cell><cell>0.591</cell></row><row><cell>19</cell><cell>hahaPLN</cell><cell>0.440</cell><cell>0.394</cell><cell>0.497</cell><cell>0.505</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019)</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The work of the second author was partially funded by the Spanish MICINN under the research project MISMIS-FAKEnHATE on Misinformation and Miscommunication in social media: FAKE news and HATE speech (PGC2018-096212-B-C31).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Automatic Detection of Irony and Humour in Twitter</title>
		<author>
			<persName><forename type="first">F</forename><surname>Barbieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Saggion</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Fifth International Conference on Computational Creativity</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Enriching Word Vectors with Subword Information</title>
		<author>
			<persName><forename type="first">P</forename><surname>Bojanowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grave</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mikolov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Transactions of the ACL</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="135" to="146" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Cardellino</surname></persName>
		</author>
		<ptr target="http://crscardellino.me/SBWCE/" />
		<title level="m">Spanish Billion Words Corpus and Embeddings</title>
				<imprint>
			<date type="published" when="2016-05-04">2016. May 4, 2018</date>
		</imprint>
	</monogr>
	<note>Online</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Overview of the HAHA Task : Humor Analysis based on Human Annotation at IberEval 2018</title>
		<author>
			<persName><forename type="first">S</forename><surname>Castro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chiruzzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rosá</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Gonzalo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Martínez</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Montalvo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Carrillo De Albornoz Cuadrado</surname></persName>
		</editor>
		<meeting>the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing<address><addrLine>SEPLN; Sevilla, Spain</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="187" to="194" />
		</imprint>
	</monogr>
	<note>CEUR-WS.org</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Is This a Joke? Detecting Humor in Spanish Tweets</title>
		<author>
			<persName><forename type="first">S</forename><surname>Castro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Garat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Moncecchi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Ibero-American Conference on Artificial Intelligence</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="139" to="150" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A Crowd-Annotated Spanish Corpus for Humor Analysis</title>
		<author>
			<persName><forename type="first">Santiago</forename><surname>Castro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luis</forename><surname>Chiruzzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aiala</forename><surname>Rosá</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Diego</forename><surname>Garat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Moncecchi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The 6th International Natural Language Processing for Social Media</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note>Proceedings of SocialNLP 2018</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">SRHR at SemEval-2017 Task 6: Word Associations for Humour Recognition</title>
		<author>
			<persName><forename type="first">A</forename><surname>Cattle</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">11th International Workshop on Semantic Evaluations</title>
				<meeting><address><addrLine>SemEval-</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017. 2017</date>
			<biblScope unit="page" from="401" to="406" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling</title>
		<author>
			<persName><forename type="first">J</forename><surname>Chung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ç</forename><surname>Gülçehre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Cho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<idno>CoRR abs/1412.3</idno>
		<ptr target="http://arxiv.org/abs/1412.3555" />
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Where is the humor in verbal irony?</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Gibbs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Bryant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">L</forename><surname>Colston</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Humor</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="575" to="595" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">QUB at SemEval-2017 Task 6: Cascaded Imbalanced Classification for Humor Analysis in Twitter</title>
		<author>
			<persName><forename type="first">X</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Toner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">11th International Workshop on Semantic Evaluations</title>
				<meeting><address><addrLine>SemEval-</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017. 2017</date>
			<biblScope unit="page" from="380" to="384" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Long short-term memory</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hochreiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schmidhuber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural computation</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="1735" to="1780" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Sentiment Analysis Model Based on Structure Attention Mechanism</title>
		<author>
			<persName><forename type="first">K</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Cao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">UK Workshop on Computational Intelligence</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="17" to="27" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Effective approaches to attention-based neural machine translation</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Luong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Pham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">D</forename><surname>Manning</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1508.04025</idno>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Automatic Detection of Satire in Twitter: A psycholinguistic-based approach</title>
		<author>
			<persName><forename type="first">María</forename><forename type="middle">Del Pilar</forename><surname>Salas-Zárate</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mario</forename><forename type="middle">Andrés</forename><surname>Paredes-Valverde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Miguel</forename><forename type="middle">Ángel</forename><surname>Rodríguez-García</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rafael</forename><surname>Valencia-García</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Alor-Hernández</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.knosys.2017.04.009</idno>
		<ptr target="http://dx.doi.org/10.1016/j.knosys.2017.04.009" />
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Characterizing Humour: An Exploration of Features in Humorous Texts</title>
		<author>
			<persName><forename type="first">R</forename><surname>Mihalcea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pulman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Intelligent Text Processing and Computational Linguistics</title>
				<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="337" to="347" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Learning to laugh (automatically): computational models for humor recognition</title>
		<author>
			<persName><forename type="first">R</forename><surname>Mihalcea</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Strapparava</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Intelligence</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="126" to="142" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">UO UPV : Deep Linguistic Humor Detection in Spanish Social Media</title>
		<author>
			<persName><forename type="first">R</forename><surname>Ortega-Bueno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Muñiz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Medina-Pagola</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Gonzalo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Martínez</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Montalvo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Carrillo-De Albornoz</surname></persName>
		</editor>
		<meeting>the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing<address><addrLine>SEPLN; Sevilla, Spain</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="203" to="213" />
		</imprint>
	</monogr>
	<note type="report_type">CEUR-WS.org</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">FreeLing 3.0: Towards Wider Multilinguality</title>
		<author>
			<persName><forename type="first">L</forename><surname>Padró</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Stanilovsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the (LREC</title>
				<meeting>the (LREC</meeting>
		<imprint>
			<date type="published" when="2012">2012. 2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Linguistic inquiry and word count: LIWC</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Pennebaker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Francis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Booth</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2001">2001. 2001</date>
			<publisher>Lawrence Erlbaum Associates</publisher>
			<biblScope unit="volume">71</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">From humor recognition to irony detection: The figurative language of social media</title>
		<author>
			<persName><forename type="first">A</forename><surname>Reyes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Buscaldi</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.datak.2012.02.005</idno>
		<ptr target="http://dx.doi.org/10.1016/j.datak.2012.02.005" />
	</analytic>
	<monogr>
		<title level="j">Data and Knowledge Engineering</title>
		<imprint>
			<biblScope unit="volume">74</biblScope>
			<biblScope unit="page" from="1" to="12" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">INGEOTEC at IberEval 2018 Task HaHa: µTC and EvoMSA to Detect and Score Humor in Texts</title>
		<author>
			<persName><forename type="first">V</forename><surname>Salgado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">S</forename><surname>Tellez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Gonzalo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Martínez</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Montalvo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Carrillo De Albornoz Cuadrado</surname></persName>
		</editor>
		<meeting>the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018) co-located with 34th Conference of the Spanish Society for Natural Language Processing<address><addrLine>SEPLN; Sevilla, Spain</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="195" to="202" />
		</imprint>
	</monogr>
	<note>CEUR-WS.org</note>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Recognizing Humor Without Recognizing Meaning</title>
		<author>
			<persName><forename type="first">J</forename><surname>Sjobergh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Araki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Workshop on Fuzzy Logic and Applications</title>
				<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="469" to="476" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">#WarTeam at SemEval-2017 Task 6: Using Neural Networks for Discovering Humorous Tweets</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Turcu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Alexa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Amarandei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Herciu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Scutaru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Iftene</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">11th International Workshop on Semantic Evaluations</title>
				<meeting><address><addrLine>SemEval-</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017. 2017</date>
			<biblScope unit="page" from="407" to="410" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Others: Attention-based lstm for aspect-level sentiment classification</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing</title>
				<meeting>the 2016 Conference on Empirical Methods in Natural Language Processing</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="606" to="615" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">Contextual Recurrent Neural Networks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wenke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fleming</surname></persName>
		</author>
		<idno>CoRR abs/1902.03455</idno>
		<ptr target="http://arxiv.org/abs/1902.03455" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Duluth at SemEval-2017 Task 6: Language Models in Humor Detection</title>
		<author>
			<persName><forename type="first">X</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pedersen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">11th International Workshop on Semantic Evaluations</title>
				<meeting><address><addrLine>SemEval-</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017. 2017</date>
			<biblScope unit="page" from="385" to="389" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Attention Based LSTM for Target Dependent Sentiment Classification</title>
		<author>
			<persName><forename type="first">M</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Tu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="5013" to="5014" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Hierarchical attention networks for document classification</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Smola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hovy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
				<meeting>the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1480" to="1489" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Attention-based LSTM with Multi-task Learning for Distant Speech Recognition</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. Interspeech</title>
				<meeting>Interspeech</meeting>
		<imprint>
			<date type="published" when="2017">2017. 2017</date>
			<biblScope unit="page" from="3857" to="3861" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
