<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Overview of HAHA at IberLEF 2019: Humor Analysis based on Human Annotation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Luis Chiruzzo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Santiago Castro</string-name>
          <email>sacastro@umich.edu</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mathias Etcheverry</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Diego Garat</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Juan José Prada</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aiala Rosá</string-name>
          <email>aialarg@fing.edu.uy</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universidad de la República</institution>
          ,
          <country country="UY">Uruguay</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Michigan</institution>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>132</fpage>
      <lpage>144</lpage>
      <abstract>
        <p>This paper presents the results of the HAHA task at IberLEF 2019, the second edition of the challenge on automatic humor recognition and analysis in Spanish. The challenge consists of two subtasks related to humor in language: automatic detection and automatic rating of humor in Spanish tweets. This year we used a corpus of 30,000 annotated Spanish tweets labeled as humorous or non-humorous, where the humorous ones additionally carry a funniness score. A total of 18 participants submitted their systems, obtaining good results overall. We present a summary of their systems and the general results for both subtasks.</p>
      </abstract>
      <kwd-group>
        <kwd>Humor</kwd>
        <kwd>Computational Humor</kwd>
        <kwd>Humor Detection</kwd>
        <kwd>Natural Language Processing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>This paper describes the results of the second edition of the task Humor
Analysis based on Human Annotation (HAHA), part of the IberLEF 2019
workshop.</p>
      <p>
        Although humor and laughter are universal and fundamental human
experiences [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ], humor has only recently become an active area of research within Machine
Learning and Computational Linguistics [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ]. Some previous works focus on the
computational processing of humor [
        <xref ref-type="bibr" rid="ref10 ref34 ref49">34,49,10</xref>
        ], but a characterization of humor
that allows its automatic recognition and generation is far from being specified,
even though humor has been historically studied from psychological [
        <xref ref-type="bibr" rid="ref19 ref25">19,25</xref>
        ],
cognitive [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ] and linguistic [
        <xref ref-type="bibr" rid="ref3 ref44 ref46">44,3,46</xref>
        ] standpoints. The aim of this task is to gain better
insight into what is humorous and what causes laughter, while at the same time
fostering the Computational Humor field.
      </p>
      <p>
        This is the second edition of the HAHA evaluation challenge. In the 2018
edition [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], three teams took part in the competition to assess the humor value
and funniness score in a corpus of 20,000 tweets. Although the results for the
first edition of the competition were satisfactory, there is room for improvement.
There have been similar or related evaluation campaigns in the past. For
example, SemEval-2015 Task 11 [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] proposed to work on figurative language, such
as metaphors and irony, but focused on Sentiment Analysis, and SemEval-2017 Task
6 [
        <xref ref-type="bibr" rid="ref43">43</xref>
        ] presented a task similar to this one. Additionally, this campaign
is related to other evaluation campaigns focused on subjectivity analysis in
language, such as irony detection [
        <xref ref-type="bibr" rid="ref53">53</xref>
        ,
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and sentiment analysis [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ].
      </p>
      <p>
        In order to address humor in this challenge, we need a working definition of
what we call humor and what could be considered a humorous tweet. In the
literature, it is generally accepted that a fundamental part of defining humor is
the perception of something being funny [
        <xref ref-type="bibr" rid="ref45">45</xref>
        ], which means the opinion of human
subjects is essential for determining whether something is humorous. However, we must
also consider whether the author intended to be humorous. In this challenge
we define two dimensions: first, we consider a text (tweet) humorous if the
intention of the author was to be funny, as assessed by human judges. Second,
we consider how funny a tweet is according to those human judges, but only
for the tweets that have already been regarded as attempted humor. These two
dimensions translate into the two subtasks of this challenge.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Task description</title>
      <p>The following subtasks are proposed for this track:</p>
      <sec id="sec-2-1">
        <title>Subtask 1: Humor Detection</title>
        <p>The aim of this subtask is to determine whether a tweet attempts to be humorous, that is, whether the
intention of the author was to be humorous. To do this, a set of
training tweets annotated with their corresponding humor class was given to the
participants. The performance metrics used for this subtask were the F1 score for
the "humorous" category and accuracy, with the F1 score as the main measure for
this subtask (accuracy is used as an additional reference).</p>
        <p>
          Two baselines were computed for this subtask over the test data, although
only the first one was published to the participants:
random: decide randomly, with 50% probability, whether a tweet is humorous
or not. This baseline achieves 42.0% F1 score for the humorous class and 50.5%
accuracy over the test corpus. This was the only published baseline
for Subtask 1.
dash: select as humorous all tweets that start with a dash (em dash, among
many other Unicode variants). This baseline was based on [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], in which
the authors found that this heuristic yields quite decent results on Twitter, given
that many tweets considered humorous are dialogues with the utterances
delimited by dashes. The heuristic has a high precision (94.5%), as almost
all the dialogues in tweets are jokes, but a low recall (16.3%), because there
are many other kinds of humorous tweets. The baseline achieves 27.8% F1 score
for the humorous class and 66.9% accuracy over the test corpus.
        </p>
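        <p>The dash baseline above can be sketched in a few lines. The following is a toy illustration (the data, the helper names and the exact dash set are ours, not the task's implementation): a tweet is predicted humorous iff it starts with a dash-like character, and the prediction is scored with precision, recall and F1.</p>

```python
# Toy sketch of the "dash" baseline: a tweet is humorous iff it starts
# with a dash-like character (em dash, among other Unicode variants).
DASHES = ("-", "\u2013", "\u2014")  # hyphen, en dash, em dash

def dash_predict(tweet: str) -> int:
    """1 = predicted humorous, 0 = predicted non-humorous."""
    return 1 if tweet.lstrip().startswith(DASHES) else 0

def precision_recall_f1(gold, pred):
    """Precision, recall and F1 for the positive (humorous) class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

tweets = ["\u2014 Mami, \u00bfa que no adivinas d\u00f3nde estoy?",
          "Hoy llueve en Montevideo.",
          "Un chiste sin guion inicial."]
gold = [1, 0, 1]  # the third tweet is a humorous tweet with no dash
pred = [dash_predict(t) for t in tweets]
print(precision_recall_f1(gold, pred))  # precision 1.0, recall 0.5
```

        <p>The toy data mirrors the behavior reported above: every dash-initial tweet it flags is indeed a joke (high precision), but humorous tweets without a leading dash are missed (low recall).</p>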
        <p>Note that a majority baseline does not make sense under this evaluation
metric, because the F1 score for the humorous class would be zero or undefined.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Subtask 2: Funniness Score Prediction</title>
        <p>The aim of this subtask is to predict how funny an average person would consider
a tweet, taking as ground truth the average funniness value of the tweets in
a corpus. The funniness score is a value from one (attempted humor but not
funny) to five (hilarious). This subtask was evaluated using Root Mean Squared
Error (RMSE).</p>
        <p>We calculated two baselines for this subtask over the test data, but we finally
published only one of them:
random: choose the value 3 (the middle of the scale) for all the tweets. The root
mean squared error for this baseline over the test data is 2.455. This was the
only published baseline for Subtask 2.
average: choose the average funniness score of the training corpus (2.0464)
for all test tweets. The root mean squared error for this baseline over the
test data is 1.651.</p>
        <p>It is important to note that the only valid tweets for this subtask are the
humorous ones, as we consider the average funniness score to be well
defined only for this category. However, as the participants could not know in advance
which of the test tweets were humorous, we asked them to rate all the tweets in
the test set; the evaluation metric then considers only those that truly belong
to the humorous class.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Corpus</title>
      <p>
        The annotation process for this task followed the same approach as in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and
[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. We extracted tweets from specific humorous accounts, as well as random tweets,
through the Twitter API via Tweepy (https://www.tweepy.org/), and then used a web application to
crowdsource the labeling of the tweets. In the app, each annotator has to label a
tweet as attempted humor or not attempted humor, and if the annotator chooses
the former, a score between one and five has to be chosen for the tweet. The
main differences between this year's and last year's annotation processes are the
following:
- We extracted the new tweets from the same fifty humorous Twitter accounts
we used last year, plus all the tweets from thirteen new accounts, covering
varied Spanish dialects, that we found this year (10,000 new tweets in total).
      </p>
      <sec id="sec-3-1">
        <title>3 https://www.tweepy.org/</title>
        <p>- We extracted 3,000 randomly sampled real-time tweets in Spanish using the
Twitter GET statuses/sample endpoint on February 4th and February 7th,
2019.
- The dataset from HAHA 2018 contained some instances of duplicate or
near-duplicate tweets (tweets that only differed in a few words and did not change
their semantics significantly). We used a semi-automatic process to detect
and remove duplicate instances: first we collected all tweet pairs whose
Jaccard coefficient was greater than 0.5, then we manually examined those pairs
and classified them into equivalence classes, taking only one tweet from each
class for the final corpus. In total, 1,278 tweets were removed from last year's corpus.
- Using the web app, we crowd-sourced the annotation of all the new tweets,
plus the tweets that had received fewer than five annotations during the HAHA
2018 annotation process and were considered humorous. The annotation
process took place between February and March 2019. Almost 800 annotators
took part, producing 75,000 votes.</p>
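        <p>The near-duplicate filtering step can be sketched as follows; a minimal illustration (token-level Jaccard over lowercased word sets, threshold 0.5 as in the text; the grouping into equivalence classes is done here with a simple union-find, one of several possible choices, and the manual-review step is omitted):</p>

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Jaccard coefficient between the word sets of two tweets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def dedup(tweets, threshold=0.5):
    """Keep one representative per equivalence class of near-duplicates
    (pairs whose Jaccard coefficient exceeds the threshold)."""
    parent = list(range(len(tweets)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(len(tweets)), 2):
        if jaccard(tweets[i], tweets[j]) > threshold:
            parent[find(j)] = find(i)

    # take the first tweet of each equivalence class
    return [t for k, t in enumerate(tweets) if find(k) == k]

tweets = ["que calor hace hoy por dios",
          "que calor hace hoy por favor",
          "ma\u00f1ana es lunes otra vez"]
print(dedup(tweets))  # the first two tweets collapse into one class
```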
        <p>The final corpus consists of 30,000 tweets, of which 11,595 (38.7%) are
humorous. This is marginally more balanced than the HAHA 2018 corpus, which had
36.8% humorous tweets. This version of the corpus is also cleaner, as many
near-duplicates have been pruned and we tried to avoid including new ones. We also
made sure that all the humorous tweets had at least five votes and all the
non-humorous ones had at least three negative votes.</p>
        <p>Text:
- Mami, ¿a que no adivinas dónde estoy?
- Hijo, ahora no puedo hablar, llámame luego.
- No puedo, solo tengo derecho a una llamada...
Translation:
- Mommy, can you guess where I am?
- Son, I can't talk now, call me later.
- I can't, I'm only entitled to one phone call...</p>
        <p>The corpus is divided into 80% training and 20% test. The training set
contains both the training and test partitions from last year, plus some new tweets,
for a total of 24,000 tweets. The new test partition consists entirely of new
tweets (6,000). Table 1 shows an example of an instance from the dataset.</p>
        <p>Systems descriptions</p>
        <p>In total, 101 teams signed up (i.e., asked for the dataset) on the CodaLab competition
website (https://competitions.codalab.org/competition/22194/), but only 18 teams
submitted their test predictions at least once. Table 2 lists the
submitting teams and related information. We describe hereafter each of their
best systems, ordered by their position in F1 score for Subtask 1. We do not
describe the teams jamestjw, vaduvabogdan, Taha, LadyHeidy and jmeaney, as they
neither submitted a paper nor provided information about their models.</p>
        <p>
adilism [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ] used the multilingual cased BERT-base [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] pre-trained model along
with the fastai library [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]. They first continued its training with BERT's
unsupervised language-model objective on the dataset provided for the
competition, without the labels. Then they fine-tuned it separately for each task,
with one-cycle learning-rate scheduling [
          <xref ref-type="bibr" rid="ref50">50</xref>
          ] and discriminative fine-tuning [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ]. For
Subtask 1, they used a linear layer on top of the last-layer output for
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>6 https://competitions.codalab.org/competition/22194/</title>
        <p>
          the [CLS] token with a tanh activation, then a dropout layer and another
linear layer with a binary cross-entropy loss. Apart from this, they used a
binarized Multinomial Naive Bayes classifier as proposed in [
          <xref ref-type="bibr" rid="ref55">55</xref>
          ], with unigram and bigram
tf-idf features, and combined its predictions with those of the neural network via
logistic regression to obtain the final predictions. For Subtask 2, they changed
the BERT model to use a mean-squared-error loss and combined the predictions with
a gradient-boosted tree model from LightGBM [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ] instead.
        </p>
        <p>
          Kevin &amp; Hiromi built an ensemble of five models: a forward ULMFiT model
[
          <xref ref-type="bibr" rid="ref26">26</xref>
          ], a backward ULMFiT model (both with fastai [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]), a pre-trained
multilingual cased BERT-base model, a pre-trained multilingual uncased BERT-base
model, and an SVM with Naive Bayes features from [
          <xref ref-type="bibr" rid="ref55">55</xref>
          ] (NBSVM). They
combined the predictions with a linear regression model and drew a decision-threshold
graph to determine at which point the F1 score is maximized. The first two
models were pre-trained (along with a SentencePiece model) on 500,000 new
tweets. For Subtask 2, they used the same ensemble without the NBSVM,
with a single score as output. In the end, they report that they made the
model for Subtask 1 benefit from the model trained for Subtask 2.
bfarzin [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] trained ULMFiT [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] from scratch using fastai [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] on 475,143
new tweets, tokenizing with Byte Pair Encoding (BPE) [
          <xref ref-type="bibr" rid="ref48">48</xref>
          ] (with
SentencePiece). Then they fine-tuned the language model on the competition data
(without labels) and fine-tuned for each subtask in a supervised way
(separately). They reported that they also tried Transformer [
          <xref ref-type="bibr" rid="ref54">54</xref>
          ] models
and LSTMs instead of QRNNs [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] but found similar performance. The language-model
training was executed with a one-cycle learning rate [
          <xref ref-type="bibr" rid="ref50">50</xref>
          ], and for the task-specific
training they first froze the pre-trained weights for a third of the epochs
and then continued training by fine-tuning them. The best weight
initializations were obtained by sampling 20 random seeds. Two linear layers were used as
the task-specific layers with ULMFiT. For Subtask 1, they used cross-entropy
loss with label smoothing [
          <xref ref-type="bibr" rid="ref42">42</xref>
          ] and over-sampled the minority class with the
Synthetic Minority Oversampling Technique [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. For Subtask 2, they treated
the non-humorous instances as "0" and used a mean-squared-error loss.
INGEOTEC [
          <xref ref-type="bibr" rid="ref39">39</xref>
          ] used µTC [
          <xref ref-type="bibr" rid="ref52">52</xref>
          ] with sparse and dense word representations.
A linear SVM seemed to be the best approach for Subtask 1, while an SVM
regressor was the best one for Subtask 2. For text classification, they also explored
fastText [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] and Flair [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], along with multiple combinations of token embeddings,
ranging from simple characters to BERT [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], as well as EvoMSA [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] and
B4MSA [
          <xref ref-type="bibr" rid="ref51">51</xref>
          ], but did not obtain an improvement.
        </p>
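        <p>The decision-threshold tuning described for the Kevin &amp; Hiromi ensemble can be sketched as follows; a toy version (scores and labels are made up, and the function names are ours), assuming the ensemble outputs a real-valued score per tweet and the cut-off that maximizes F1 on held-out data is chosen:</p>

```python
def f1_at(gold, scores, threshold):
    """F1 for the positive class when predicting 1 iff score >= threshold."""
    pred = [1 if s >= threshold else 0 for s in scores]
    tp = sum(g and p for g, p in zip(gold, pred))
    fp = sum((not g) and p for g, p in zip(gold, pred))
    fn = sum(g and (not p) for g, p in zip(gold, pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_threshold(gold, scores):
    # sweep every observed score as a candidate cut-off
    return max(set(scores), key=lambda t: f1_at(gold, scores, t))

gold = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.6, 0.55, 0.4, 0.3, 0.1]
t = best_threshold(gold, scores)
print(t, f1_at(gold, scores, t))  # picks 0.4 here (F1 = 6/7)
```

        <p>Sweeping the threshold and plotting F1 against it yields the decision graph mentioned above; the default 0.5 cut-off is rarely the F1-optimal one on imbalanced data.</p>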
        <p>
          BLAIR GMU [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ] used the multilingual cased pre-trained BERT-base [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
The authors took the last-layer output corresponding to the [CLS] token and
added a linear output layer for classification in Subtask 1 with a binary
cross-entropy loss, while they used mean-squared error for Subtask 2. The authors also
improved the model for Subtask 1 by considering not only the correct labels but
also the output predictions of their model for Subtask 2. The authors do not
report whether they used BERT as a feature-extraction model or fine-tuned
it. (See http://kevinbird15.com/2019/06/26/High-Level-Haha-Architecture.html
for more information on the Kevin &amp; Hiromi system; SentencePiece:
https://github.com/google/sentencepiece.)
        </p>
        <p>
          UO UPV2 [
          <xref ref-type="bibr" rid="ref38">38</xref>
          ] performed lemmatization using FreeLing [
          <xref ref-type="bibr" rid="ref40">40</xref>
          ], then used a
Spanish word-embedding collection developed in-house together with hand-crafted
stylistic, structural, content and affective features (including features based
on LIWC [
          <xref ref-type="bibr" rid="ref41">41</xref>
          ]) to create a vector used as the initial hidden state of a BiGRU [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]
neural network with attention, followed by three dense layers.
        </p>
        <p>
          UTMN [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] approached the challenge as a multi-task learning setting with hard parameter
sharing [
          <xref ref-type="bibr" rid="ref47">47</xref>
          ]: a neural network that processes several types of features in parallel
with a common scheme, then concatenates the layers and feeds the outcome to a
dense layer. The four concatenated features are: a sentence representation coming
from Spanish word embeddings [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] fed into a 1D-CNN with max pooling, tf-idf
features restricted to 5,000 words followed by two dense layers, sentiment and topic-modeling
features with two dense layers, and some format-based and other types of
hand-crafted features.
        </p>
        <p>
          LaSTUS/TALN [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] developed a multi-task supervised learning scheme for
humor along with irony, sentiment and aggressiveness, using dialect-specific word
embeddings, a common BiLSTM layer and two dense layers as classifiers for each
task (including both subtasks).
        </p>
        <p>
          Aspie96 [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] trained a character-level 1D-CNN with three layers, followed by a
BiRNN and then a dense layer, to output a binary value for Subtask 1, and
used a similar approach but with an output value of up to 5 for Subtask 2.
OFAI–UKP [
          <xref ref-type="bibr" rid="ref35">35</xref>
          ] used Gaussian Processes Preference Learning [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], training
Gaussian processes on three word representations (Spanish Twitter
embeddings [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], the average token frequency in a Wikipedia dump, and the average
polysemy of the word's lemma) and several format-based hand-crafted features.
acattle [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] created a document tensor space for embedding tweets, considering
each tweet a document, and trained Random Trees. They also tried
propagating the labels with the instance-based learning technique k-Nearest
Neighbors, but this technique did not outperform the first one.
garain [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] used Google Translate to translate the sentences into English and
applied SenticNet5 [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] to obtain the sentiment of the words. The authors transformed
the tweets into one-hot vectors and included some manually extracted format
and sentiment features to train a BiLSTM neural network.
premjithb processed the tweets through an embedding layer and then an
LSTM layer for Subtask 1. For Subtask 2, they applied doc2vec [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ]
and used linear regression.
        </p>
        <p>Subtask 1 results (F1 score for the humorous class, %):
adilism: 82.1
Kevin &amp; Hiromi: 81.6
bfarzin: 81.0
jamestjw: 79.8
INGEOTEC: 78.8
BLAIR GMU: 78.4
UO UPV2: 77.3
vaduvabogdan: 77.2
UTMN: 76.0
LaSTUS/TALN: 75.9
Taha: 75.7
LadyHeidy: 72.5
Aspie96: 71.1
OFAI–UKP: 66.0
acattle: 64.0
jmeaney: 63.6
garain: 59.3
Amrita CEN: 49.5</p>
        <p>Subtask 2 results (RMSE):
adilism: 0.736
bfarzin: 0.746
Kevin &amp; Hiromi: 0.769
jamestjw: 0.798
INGEOTEC: 0.822
BLAIR GMU: 0.910
LaSTUS/TALN: 0.919
UTMN: 0.945
acattle: 0.963
Amrita CEN: 1.074
followed, in order, by the average baseline, garain, Aspie96 and OFAI–UKP.</p>
        <p>
When fine-tuning pre-trained models, the models may incur catastrophic forgetting (thus not leveraging the existing
knowledge) or overfitting. It is also important to test several random seeds to get
robust results, as transfer learning based on pre-trained models such as BERT
shows high variance. fastai [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ] proved useful and practical for accomplishing
this in the best systems. Domain adaptation also seemed to be important, such
as continuing the language-model training on the competition dataset or on new
tweets, as the pre-trained models are not well suited to tweets. Apart from this,
multi-task learning, which is another way to leverage knowledge, seemed to be
useful to many teams based on their results and on what they reported,
including benefiting one subtask of this competition from the other one. To take
advantage of multiple techniques, some teams built ensembles (e.g., ensembling
neural networks with Naive Bayes models) that boosted the results according to
what they reported. Lastly, we observed that teams signed up regularly during
the whole competition timeline, and that their sign-up time did not show a
clear correlation with their later performance (i.e., teams that started or
downloaded the training data later did not perform worse in general).
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>We presented the HAHA (Humor Analysis based on Human Annotation) task at
IberLEF 2019. This automatic humor detection and analysis challenge consists
of two subtasks: identifying whether a tweet attempts to be humorous, and
giving a funniness score to the humorous ones. Eighteen participants submitted
systems for Subtask 1; the best system achieved 82.1% F1 for the humorous
class and 85.5% accuracy. Thirteen participants submitted systems for Subtask
2; the best system achieved an RMSE of 0.736. All systems surpassed the random
baselines. The top scores in this edition of the competition also beat the top
scores achieved last year (79.7% F1 for Subtask 1 and 0.978 RMSE for Subtask 2),
although the corpora are different: this year's training set contains all the training
and test data from last year plus some more tweets, and this year's test set is
completely new.</p>
      <p>Given this year's interest in the task (more than a hundred teams applied to
the competition and eighteen teams sent submissions), and given that many of the
participants (and potential participants) do not speak Spanish as their main language,
it would be interesting to run a similar challenge in other languages,
particularly in English. Even for Spanish, given the high variability of language
and humor across geography and demographics, it would be interesting to see
how this affects the detection and rating of humor. To do this, we would need
larger corpora annotated by more people from different linguistic, geographical
and social backgrounds.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Akbik</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blythe</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vollgraf</surname>
          </string-name>
          , R.:
          <article-title>Contextual string embeddings for sequence labeling</article-title>
          .
          <source>In: COLING</source>
          <year>2018</year>
          , 27th International Conference on Computational Linguistics. pp.
          <volume>1638</volume>
          –
          <issue>1649</issue>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Altin</surname>
            ,
            <given-names>L.S.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alex</surname>
            <given-names>Bravo</given-names>
          </string-name>
          , Saggion, H.: LaSTUS/TALN at HAHA:
          <article-title>Humor Analysis based on Human Annotation</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ).
          <source>CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao,
          <source>Spain (9</source>
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Attardo</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raskin</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Script theory revis(it)ed: Joke similarity and joke representation model</article-title>
          .
          <source>Humor: International Journal of Humor Research</source>
          (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bojanowski</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grave</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joulin</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mikolov</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Enriching word vectors with subword information</article-title>
          .
          <source>Transactions of the Association for Computational Linguistics</source>
          <volume>5</volume>
          ,
          <issue>135</issue>
          –
          <fpage>146</fpage>
          (
          <year>2017</year>
          ). https://doi.org/10.1162/tacl_a_00051
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Bradbury</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Merity</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiong</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Socher</surname>
          </string-name>
          , R.:
          <article-title>Quasi-recurrent neural networks</article-title>
          .
          <source>arXiv abs/1611.01576</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Cambria</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Poria</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hazarika</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kwok</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Senticnet 5: Discovering conceptual primitives for sentiment analysis by means of context embeddings</article-title>
          .
          <source>In: Thirty-Second AAAI Conference on Artificial Intelligence</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Cardellino</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <source>Spanish Billion Words Corpus and Embeddings</source>
          (March
          <year>2016</year>
          ), https://crscardellino.github.io/SBWCE/
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Castro</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chiruzzo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Overview of the HAHA Task: Humor Analysis based on Human Annotation at IberEval 2018</article-title>
          .
          <source>In: CEUR Workshop Proceedings</source>
          . vol.
          <volume>2150</volume>
          , pp.
          <fpage>187</fpage>
          –
          <lpage>194</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Castro</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chiruzzo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garat</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moncecchi</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>A Crowd-Annotated Spanish Corpus for Humor Analysis</article-title>
          .
          <source>In: Proceedings of the Sixth International Workshop on Natural Language Processing for Social Media</source>
          . pp.
          <fpage>7</fpage>
          –
          <lpage>11</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Castro</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cubero</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Garat</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moncecchi</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Is This a Joke? Detecting Humor in Spanish Tweets</article-title>
          .
          <source>In: Ibero-American Conference on Artificial Intelligence</source>
          . pp.
          <fpage>139</fpage>
          –
          <lpage>150</lpage>
          . Springer (
          <year>2016</year>
          ). https://doi.org/10.1007/978-3-319-47955-2_12
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Cattle</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Papalexakis</surname>
            ,
            <given-names>Z.Z.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          :
          <article-title>Generating Document Embeddings for Humor Recognition using Tensor Decomposition</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (September
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Chawla</surname>
            ,
            <given-names>N.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bowyer</surname>
            ,
            <given-names>K.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hall</surname>
            ,
            <given-names>L.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kegelmeyer</surname>
            ,
            <given-names>W.P.</given-names>
          </string-name>
          :
          <article-title>SMOTE: synthetic minority over-sampling technique</article-title>
          .
          <source>Journal of Artificial Intelligence Research</source>
          <volume>16</volume>
          ,
          <fpage>321</fpage>
          –
          <lpage>357</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Chu</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghahramani</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Preference learning with Gaussian processes</article-title>
          .
          <source>In: Proceedings of the 22nd international conference on Machine learning</source>
          . pp.
          <fpage>137</fpage>
          –
          <lpage>144</lpage>
          . ACM
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Chung</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gulcehre</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cho</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Empirical evaluation of gated recurrent neural networks on sequence modeling</article-title>
          .
          <source>arXiv preprint arXiv:1412.3555</source>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Cignarella</surname>
            ,
            <given-names>A.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frenda</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Basile</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bosco</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patti</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , et al.:
          <article-title>Overview of the EVALITA 2018 task on irony detection in Italian tweets (IronITA)</article-title>
          .
          <source>In: Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA</source>
          <year>2018</year>
          ). vol.
          <volume>2263</volume>
          , pp.
          <fpage>1</fpage>
          –
          <lpage>6</lpage>
          . CEUR-WS
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Czapla</surname>
            ,
            <given-names>B.F.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Howard</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Applying a Pre-trained Language Model to Spanish Twitter Humor Prediction</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (September
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Deriu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lucchi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Luca</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Severyn</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , Muller,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Cieliebak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Hofmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Jaggi</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          :
          <article-title>Leveraging large amounts of weakly supervised data for multi-language sentiment classification</article-title>
          .
          <source>In: Proceedings of the 26th international conference on world wide web</source>
          . pp.
          <fpage>1045</fpage>
          –
          <lpage>1052</lpage>
          . International World Wide Web Conferences Steering Committee
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Devlin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chang</surname>
            ,
            <given-names>M.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toutanova</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>BERT: Pre-training of deep bidirectional transformers for language understanding</article-title>
          .
          <source>In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>1</volume>
          (Long and Short Papers). pp.
          <fpage>4171</fpage>
          –
          <lpage>4186</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Freud</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Strachey</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Jokes and Their Relation to the Unconscious</article-title>
          . Complete Psychological Works of Sigmund Freud, W. W. Norton &amp; Company (
          <year>1905</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Garain</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Humor Analysis based on Human Annotation (HAHA)-2019: Humor Analysis at Tweet Level using Deep Learning</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (September
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Ghosh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Veale</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shutova</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barnden</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reyes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>SemEval-2015 task 11: Sentiment analysis of figurative language in Twitter</article-title>
          .
          <source>In: Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)</source>
          . pp.
          <fpage>470</fpage>
          –
          <lpage>478</lpage>
          (
          <year>2015</year>
          ). https://doi.org/10.18653/v1/s15-2080
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Giudice</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Aspie96 at HAHA (IberLEF 2019): Humor Detection in Spanish Tweets with Character-Level Convolutional RNN</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (September
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Glazkova</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ganzherli</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mikhalkova</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>UTMN at HAHA@IberLEF2019: Recognizing Humor in Spanish Tweets using Hard Parameter Sharing for Neural Networks</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (September
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Graff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miranda-Jimenez</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tellez</surname>
            ,
            <given-names>E.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moctezuma</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>EvoMSA: A multilingual evolutionary approach for sentiment analysis</article-title>
          .
          <source>arXiv preprint arXiv:1812.02307</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Gruner</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>The Game of Humor: A Comprehensive Theory of Why We Laugh</article-title>
          . Transaction Publishers (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Howard</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruder</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Universal language model fine-tuning for text classification</article-title>
          .
          <source>In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</source>
          . pp.
          <fpage>328</fpage>
          –
          <lpage>339</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Howard</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , et al.: fastai. https://github.com/fastai/fastai (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Ismailov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Humor Analysis Based on Human Annotation Challenge at IberLEF 2019: First-place Solution</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (September
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Ke</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meng</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finley</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ye</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>T.Y.</given-names>
          </string-name>
          :
          <article-title>LightGBM: A highly efficient gradient boosting decision tree</article-title>
          .
          <source>In: Advances in Neural Information Processing Systems</source>
          . pp.
          <fpage>3146</fpage>
          –
          <lpage>3154</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Le</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mikolov</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Distributed representations of sentences and documents</article-title>
          .
          <source>In: International conference on machine learning</source>
          . pp.
          <fpage>1188</fpage>
          –
          <lpage>1196</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Mao</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>A BERT-based Approach for Automatic Humor Detection and Scoring</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (September
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>R.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ford</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>The psychology of humor: An integrative approach</article-title>
          . Academic Press (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name><surname>Martínez-Cámara</surname>, <given-names>E.</given-names></string-name>
          ,
          <string-name><surname>Almeida Cruz</surname>, <given-names>Y.</given-names></string-name>
          ,
          <string-name><surname>Díaz-Galiano</surname>, <given-names>M.C.</given-names></string-name>
          ,
          <string-name><surname>Estévez Velarde</surname>, <given-names>S.</given-names></string-name>
          ,
          <string-name><surname>García-Cumbreras</surname>, <given-names>M.A.</given-names></string-name>
          ,
          <string-name><surname>García-Vega</surname>, <given-names>M.</given-names></string-name>
          ,
          <string-name><surname>Gutiérrez Vázquez</surname>, <given-names>Y.</given-names></string-name>
          ,
          <string-name><surname>Montejo Ráez</surname>, <given-names>A.</given-names></string-name>
          ,
          <string-name><surname>Montoyo Guijarro</surname>, <given-names>A.</given-names></string-name>
          ,
          <string-name><surname>Muñoz Guillena</surname>, <given-names>R.</given-names></string-name>
          ,
          <string-name><surname>Piad Morffis</surname>, <given-names>A.</given-names></string-name>
          ,
          <string-name><surname>Villena-Román</surname>, <given-names>J.</given-names></string-name>
          :
          <article-title>Overview of TASS 2018: Opinions, health and emotions</article-title>
          . In: Martínez-Cámara, E., Almeida Cruz, Y., Díaz-Galiano, M.C., Estévez Velarde, S., García-Cumbreras, M.A., García-Vega, M., Gutiérrez Vázquez, Y., Montejo Ráez, A., Montoyo Guijarro, A., Muñoz Guillena, R., Piad Morffis, A., Villena-Román, J. (eds.)
          <source>Proceedings of TASS 2018: Workshop on Semantic Analysis at SEPLN (TASS 2018)</source>
          .
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>2172</volume>
          . CEUR-WS, Sevilla, Spain (
          <year>September 2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Mihalcea</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Strapparava</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Making Computers Laugh: Investigations in Automatic Humor Recognition</article-title>
          .
          <source>In: Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing</source>
          . pp.
          <fpage>531</fpage>
          –
          <lpage>538</lpage>
          . HLT '05, Association for Computational Linguistics, Stroudsburg, PA, USA (
          <year>2005</year>
          ). https://doi.org/10.3115/1220575.1220642
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dinh</surname>
            ,
            <given-names>E.L.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Simpson</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurevych</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>OFAI–UKP at HAHA@IberLEF2019: Predicting the Humorousness of Tweets Using Gaussian Process Preference Learning</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (September
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Minsky</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Jokes and the logic of the cognitive unconscious</article-title>
          . Springer (
          <year>1980</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Mulder</surname>
            ,
            <given-names>M.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nijholt</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Humour research: State of the art</article-title>
          .
          <source>Technical Report TR-CTIT-02-34</source>
          , Centre for Telematics and Information Technology, University of Twente, Enschede (September
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <surname>Ortega-Bueno</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pagola</surname>
            ,
            <given-names>J.E.M.</given-names>
          </string-name>
          :
          <article-title>UO UPV2 at HAHA 2019: BiGRU Neural Network Informed with Linguistic Features for Humor Recognition</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (September
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <surname>Ortiz-Bejar</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tellez</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Graff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moctezuma</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miranda-Jimenez</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>INGEOTEC at IberLEF 2019 Task HaHa</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (September
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <surname>Padro</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stanilovsky</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>FreeLing 3.0: Towards wider multilinguality</article-title>
          .
          <source>In: Proceedings of the Language Resources and Evaluation Conference (LREC 2012)</source>
          . ELRA, Istanbul, Turkey (May
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
            <surname>Pennebaker</surname>
            ,
            <given-names>J.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Francis</surname>
            ,
            <given-names>M.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Booth</surname>
            ,
            <given-names>R.J.</given-names>
          </string-name>
          :
          <article-title>Linguistic inquiry and word count: LIWC 2001</article-title>
          . Mahwah: Lawrence Erlbaum Associates
          <volume>71</volume>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name>
            <surname>Pereyra</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tucker</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chorowski</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaiser</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hinton</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Regularizing neural networks by penalizing confident output distributions</article-title>
          .
          <source>arXiv preprint arXiv:1701.06548</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          43.
          <string-name>
            <surname>Potash</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Romanov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rumshisky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>SemEval-2017 Task 6: #HashtagWars: Learning a sense of humor</article-title>
          .
          <source>In: Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)</source>
          . pp.
          <fpage>49</fpage>
          -
          <lpage>57</lpage>
          (
          <year>2017</year>
          ). https://doi.org/10.18653/v1/s17-2004
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          44.
          <string-name>
            <surname>Raskin</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Semantic Mechanisms of Humor</article-title>
          .
          <source>Studies in Linguistics and Philosophy</source>
          , Springer (
          <year>1985</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          45.
          <string-name>
            <surname>Ruch</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Psychology of humor</article-title>
          .
          <source>The primer of humor research 8</source>
          ,
          <fpage>17</fpage>
          -
          <lpage>101</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          46.
          <string-name>
            <surname>Ruch</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Attardo</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raskin</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Toward an empirical verification of the general theory of verbal humor</article-title>
          .
          <source>HUMOR: the International Journal of Humor Research</source>
          (
          <year>1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          47.
          <string-name>
            <surname>Ruder</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>An overview of multi-task learning in deep neural networks</article-title>
          .
          <source>arXiv preprint arXiv:1706.05098</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          48.
          <string-name>
            <surname>Sennrich</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Haddow</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Birch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Neural machine translation of rare words with subword units</article-title>
          .
          <source>arXiv preprint arXiv:1508.07909</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          49. Sjobergh, J.,
          <string-name>
            <surname>Araki</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Recognizing Humor Without Recognizing Meaning</article-title>
          . In: Masulli,
          <string-name>
            <given-names>F.</given-names>
            ,
            <surname>Mitra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Pasi</surname>
          </string-name>
          ,
          <string-name>
            <surname>G</surname>
          </string-name>
          . (eds.)
          <source>WILF. Lecture Notes in Computer Science</source>
          , vol.
          <volume>4578</volume>
          , pp.
          <volume>469</volume>
          {
          <fpage>476</fpage>
          . Springer (
          <year>2007</year>
          ). https://doi.org/10.1007/978-3-
          <fpage>540</fpage>
          -73400- 0 59
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          50.
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>L.N.</given-names>
          </string-name>
          :
          <article-title>A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay</article-title>
          .
          <source>arXiv preprint arXiv:1803.09820</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          51.
          <string-name>
            <surname>Tellez</surname>
            ,
            <given-names>E.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miranda-Jimenez</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Graff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moctezuma</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suarez</surname>
            ,
            <given-names>R.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Siordia</surname>
            ,
            <given-names>O.S.</given-names>
          </string-name>
          :
          <article-title>A simple approach to multilingual polarity classification in Twitter</article-title>
          .
          <source>Pattern Recognition Letters</source>
          <volume>94</volume>
          ,
          <fpage>68</fpage>
          -
          <lpage>74</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          52.
          <string-name>
            <surname>Tellez</surname>
            ,
            <given-names>E.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moctezuma</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miranda-Jimenez</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Graff</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>An automated text categorization framework based on hyperparameter optimization</article-title>
          .
          <source>Knowledge-Based Systems</source>
          <volume>149</volume>
          ,
          <fpage>110</fpage>
          -
          <lpage>123</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          53.
          <string-name>
            <surname>Van Hee</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lefever</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoste</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>SemEval-2018 Task 3: Irony detection in English tweets</article-title>
          .
          <source>In: Proceedings of The 12th International Workshop on Semantic Evaluation</source>
          . pp.
          <fpage>39</fpage>
          -
          <lpage>50</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          54.
          <string-name>
            <surname>Vaswani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shazeer</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parmar</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uszkoreit</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomez</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kaiser</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Polosukhin</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Attention is all you need</article-title>
          .
          <source>In: Advances in neural information processing systems</source>
          . pp.
          <fpage>5998</fpage>
          -
          <lpage>6008</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          55.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manning</surname>
            ,
            <given-names>C.D.</given-names>
          </string-name>
          :
          <article-title>Baselines and bigrams: Simple, good sentiment and topic classification</article-title>
          .
          <source>In: Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2</source>
          . pp.
          <fpage>90</fpage>
          -
          <lpage>94</lpage>
          . Association for Computational Linguistics (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>