<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>UO IRO: Linguistic informed deep-learning model for irony detection</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Reynier Ortega-Bueno</string-name>
          <email>reynier.ortega@cerpamid.co.cu</email>
          <email>reynier@uo.edu.cu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>José E. Medina Pagola</string-name>
          <email>jmedinap@uci.cu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Pattern Recognition and</institution>
          ,
          <addr-line>Data Mining, Santiago de Cuba</addr-line>
          ,
          <country country="CU">Cuba</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Informatics Sciences</institution>
          ,
          <addr-line>Havana</addr-line>
          ,
          <country country="CU">Cuba</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>English. This paper describes our UO IRO system, developed for participating in the IronITA shared task organized within the EVALITA 2018 Workshop. Our approach is based on a deep learning model informed with linguistic knowledge. Specifically, a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) neural network are ensembled; moreover, the model is informed with linguistic information incorporated through its second-to-last hidden layer. The results achieved by our system are encouraging; however, a more fine-tuned hyper-parameter setting is required to improve the model's effectiveness.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>Italian. This paper describes our UO IRO system, developed for participating in the IronITA shared task at EVALITA 2018. Our approach is based on a deep learning model with linguistic knowledge: in particular, a Convolutional Neural Network (CNN) and a Long Short-Term Memory neural network (LSTM). Moreover, the model is enriched with linguistic knowledge, incorporated in its second-to-last hidden layer. Although a finer-grained tuning of the parameters is needed to improve the model's performance, the results obtained are encouraging.</p>
    </sec>
    <sec id="sec-2">
      <title>1 Introduction</title>
      <p>
        Computers interacting with humans through language, in a natural way, continues to be one of the most salient challenges for Artificial Intelligence researchers and practitioners. Nowadays, several basic tasks related to natural language comprehension have been effectively resolved. Notwithstanding, only slight advances have been achieved by machines when figurative devices and creativity are used in language for communicative purposes. Irony is a peculiar case of figurative language, frequently used in real-life communication. As human beings, we appeal to irony to express, in an implicit way, a meaning opposite to the literal sense of the utterance
        <xref ref-type="bibr" rid="ref1 ref24">(Attardo, 2000; Wilson
and Sperber, 1992)</xref>
        . Thus, understanding irony requires a more complex set of cognitive and linguistic abilities than understanding literal meaning. Due to its nature, irony has important implications for sentiment analysis and other related tasks which aim at recognizing feelings and emotions in texts. Considering that, automatically detecting irony in textual messages is an important step towards enhancing sentiment analysis, and it is still an open research problem
        <xref ref-type="bibr" rid="ref12 ref14 ref21">(Gupta and Yang, 2017; Maynard and Greenwood, 2014; Reyes et al., 2013)</xref>
        .
      </p>
      <p>
        In this work we address the fascinating problem of automatic irony detection in tweets written in the Italian language. In particular, we describe our irony detection system (UO IRO), developed for participating in IronITA 2018: Irony Detection in Italian Tweets
        <xref ref-type="bibr" rid="ref5">(Cignarella et al., 2018a)</xref>
        . Our proposed model is a deep learning model informed with linguistic information. Specifically, a CNN and an attention-based LSTM neural network are ensembled; moreover, the model is informed with linguistic information incorporated through its second-to-last hidden layer. We only participated in Task A (irony detection), for which two constrained runs and two unconstrained runs were submitted. The official results show that our system obtains interesting results: our best run was ranked 12th out of 17 submissions. The paper is organized as follows. In Section 2, we introduce our UO IRO system for irony detection. Experimental results are subsequently discussed in Section 3. Finally, in Section 4, we present our conclusions and promising directions for future work.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2 UO IRO system for irony detection</title>
      <p>
        The motivation for this work comes from two directions. In the first place, the recent and promising results reported by some authors
        <xref ref-type="bibr" rid="ref11 ref19 ref20 ref25 ref7 ref8">(Deriu and Cieliebak, 2016; Cimino and Dell'Orletta, 2016; González et al., 2018; Rangwani et al., 2018; Wu et al., 2018; Peng et al., 2018)</xref>
        in the use of convolutional and recurrent networks, as well as their hybridization, for dealing with figurative language. The second direction is motivated by the wide use of manually encoded linguistic features, which have been shown to be good indicators for discriminating between ironic and non-ironic content
        <xref ref-type="bibr" rid="ref10 ref13 ref14 ref2 ref21 ref22 ref9">(Reyes et al., 2012; Reyes and Rosso, 2014; Barbieri et al., 2014; Farías et al., 2016; Farías et al., 2018)</xref>
        .
      </p>
      <p>Our proposal learns a representation of the tweets in three ways. First, we learn a representation based on a recurrent network, with the purpose of capturing long dependencies among the terms in a tweet. Second, a representation based on a convolutional network is considered; it tries to encode local and partial relations between words which are near each other. The last representation is based on linguistic features computed from the tweets. All the previously computed linguistic features are concatenated into a one-dimensional vector, which is passed through a dense hidden layer that encodes the linguistic knowledge and incorporates this information into the model.</p>
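The three-way representation above can be sketched with the Keras functional API. This is a minimal illustration, not the authors' exact configuration: the vocabulary size, sequence length, layer widths and the size of the linguistic feature vector are assumed values, and the attention mechanism of the recurrent branch is omitted for brevity.

```python
# Minimal Keras sketch of a three-branch UO IRO-style model. All sizes
# (vocabulary, sequence length, layer widths, number of linguistic
# features) are assumed values for illustration only.
import numpy as np
from tensorflow.keras.layers import (Input, Embedding, Bidirectional, LSTM,
                                     Conv1D, GlobalMaxPooling1D, Dense,
                                     Concatenate)
from tensorflow.keras.models import Model

MAX_LEN, VOCAB, EMB_DIM, N_LING = 50, 10000, 300, 40

tokens = Input(shape=(MAX_LEN,), name="tokens")
emb = Embedding(VOCAB, EMB_DIM)(tokens)

rnn = Bidirectional(LSTM(64))(emb)                   # long-range dependencies
cnn = GlobalMaxPooling1D()(Conv1D(64, 3, activation="relu")(emb))  # local patterns

ling_in = Input(shape=(N_LING,), name="linguistic")  # hand-crafted features
ling = Dense(32, activation="relu")(ling_in)         # dense layer encoding them

merged = Concatenate()([rnn, cnn, ling])             # merge layer
hidden = Dense(64, activation="relu")(merged)
output = Dense(2, activation="softmax")(hidden)      # ironic / not ironic

model = Model(inputs=[tokens, ling_in], outputs=output)
model.compile(loss="categorical_crossentropy", optimizer="adam")
# Training as in the paper: model.fit(..., batch_size=64, epochs=20)

probs = model.predict([np.zeros((1, MAX_LEN), dtype="int32"),
                       np.zeros((1, N_LING))], verbose=0)
```

The two-input functional model mirrors the idea of injecting the linguistic vector through its own dense hidden layer before the merge.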
      <p>
        Finally, the three neural-network-based outputs are combined in a merge layer. The integrated representation is passed to a dense hidden layer, and the final classification is performed by the output layer, which uses a softmax activation function to predict the ironic or non-ironic label. For training the complete model we use categorical cross-entropy as the loss function and the Adam method
        <xref ref-type="bibr" rid="ref13">(Kingma and Ba, 2014)</xref>
        as the optimizer; we use a batch size of 64 and train the model for 20 epochs. Our proposal was implemented using the Keras Framework1. The architecture of UO IRO is shown in Figure 1 and described below.
1https://keras.io/
      </p>
      <p>In the preprocessing step, the tweets are cleaned. Firstly, emoticons, urls, hashtags, mentions and Twitter-specific tokens (RT for retweet and FAV for favorite) are recognized and replaced by a corresponding wild-card which encodes the meaning of these special tokens. Afterwards, the tweets are morphologically analyzed by FreeLing
        <xref ref-type="bibr" rid="ref17">(Padró and Stanilovsky, 2012)</xref>
        . In this way, each resulting token is assigned its lemma. Then, the words in the tweets are represented as vectors using a word embedding model. In this work we use the publicly available Italian pre-trained vectors2
        <xref ref-type="bibr" rid="ref4">(Bojanowski
et al., 2017)</xref>
        .
      </p>
      <p>We use a model that consists of a Bidirectional LSTM neural network (Bi-LSTM) at the word level. At each time step t, the Bi-LSTM receives as input a word vector wt carrying syntactic and semantic information, known as a word embedding. The idea behind this Bi-LSTM is to capture long-range and backwards dependencies in the tweets. Afterwards, an attention layer is applied over each hidden state ht. The attention weights are learned using the concatenation of the current hidden state ht of the Bi-LSTM and the past hidden state st-1. The goal of this layer is to derive a context vector ct that captures relevant information to feed as input to the next level. Finally, an LSTM layer is stacked on top. At each time step, this network receives the context vector ct, which is propagated until the final hidden state sTx. This vector (sTx) can be considered a high-level representation of the tweet. For more details, please see (Ortega-Bueno et al., 2018).
2https://s3-us-west-1.amazonaws.com/fasttextvectors/wiki.it.zip
      </p>
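The attention step can be illustrated with a small numpy sketch: each time step is scored from the concatenation of the current hidden state ht and the past hidden state st-1, and the context vector ct is the resulting weighted sum of the Bi-LSTM hidden states. The scoring vector W_a below is a stand-in for the learned attention parameters, which a real implementation would train jointly with the network.

```python
# Numpy sketch of the attention step: scores come from [h_t ; s_{t-1}]
# and the context vector c_t is the weighted sum of the hidden states.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(H, s_prev, W_a):
    """H: (T, d) Bi-LSTM hidden states; s_prev: previous hidden state of
    the top LSTM; W_a: scoring vector of size d + len(s_prev)."""
    scores = np.array([np.concatenate([h, s_prev]) @ W_a for h in H])
    alpha = softmax(scores)   # attention weights over the T time steps
    return alpha @ H          # context vector c_t, shape (d,)

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))                        # 5 time steps, 8-dim states
c = attention_context(H, rng.normal(size=4), rng.normal(size=12))
```

Since the attention weights are non-negative and sum to one, the context vector is always a convex combination of the hidden states.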
      <sec id="sec-3-1">
        <title>2.3 Convolutional Neural Network</title>
        <p>We use a CNN model that consists of 3 pairs of convolutional and pooling layers. Filters of sizes three, four and five were defined for the convolutional layers. For the pooling layers, the max-pooling strategy was used. We also use the Rectified Linear Unit (ReLU), normalization and dropout to improve the accuracy and generalizability of the model.
</p>
      </sec>
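The convolutional component just described can be sketched in Keras, assuming an already-embedded tweet; the number of filters and the input sizes are illustrative, since the paper does not report them.

```python
# Sketch of the CNN component: convolution+pooling pairs with filter sizes
# three, four and five, ReLU activations, batch normalization and dropout.
# All layer sizes are assumed values, not the paper's configuration.
import numpy as np
from tensorflow.keras.layers import (Input, Conv1D, MaxPooling1D,
                                     BatchNormalization, Dropout,
                                     GlobalMaxPooling1D, Concatenate)
from tensorflow.keras.models import Model

emb = Input(shape=(50, 300))        # an already-embedded tweet (assumed sizes)
branches = []
for k in (3, 4, 5):                 # the three filter sizes
    x = Conv1D(64, k, activation="relu")(emb)  # convolution + ReLU
    x = BatchNormalization()(x)                # normalization
    x = MaxPooling1D(2)(x)                     # max-pooling strategy
    branches.append(GlobalMaxPooling1D()(x))   # collapse to a fixed-size vector
feat = Dropout(0.5)(Concatenate()(branches))   # dropout for generalizability

cnn_model = Model(emb, feat)
feat_vec = cnn_model.predict(np.zeros((1, 50, 300)), verbose=0)
```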
      <sec id="sec-3-2">
        <title>2.4 Linguistic Features</title>
        <p>In our work, we explored linguistic features useful for irony detection in texts, which can be grouped into three main categories: Stylistic, Structural and Content, and Polarity Contrast. We define a set of features distributed as follows:</p>
      </sec>
      <sec id="sec-3-3">
        <title>Stylistic Features</title>
        <p>Length: Three different features were considered: the number of words, the number of characters, and the mean length of the words in the tweet.</p>
        <p>Hashtags: The amount of hashtags.</p>
        <p>Urls: The number of URLs.</p>
        <p>Emoticons: The number of emoticons.</p>
        <p>Exclamations: Occurrences of exclamation marks.</p>
        <p>Emphasized Words: Four different features were considered: words emphasized through repetition, capitalization, character flooding, and exclamation marks.</p>
        <p>Punctuation Marks: The frequency of dots, commas, semicolons, and question marks.</p>
        <p>Quotations: The number of expressions between quotation marks.</p>
      </sec>
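A toy extractor for a few of the stylistic counts listed above; the regular expressions and the example tweet are simplifications of what a production system would use.

```python
# Toy extractor for some stylistic features (simplified regexes).
import re

def stylistic_features(tweet):
    words = tweet.split()
    return {
        "n_words": len(words),
        "n_chars": len(tweet),
        "mean_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "n_hashtags": len(re.findall(r"#\w+", tweet)),
        "n_urls": len(re.findall(r"https?://\S+", tweet)),
        "n_exclamations": tweet.count("!"),
        "n_flooding": len(re.findall(r"(\w)\1{2,}", tweet)),  # e.g. "cooool"
        "n_quotations": len(re.findall(r'"[^"]*"', tweet)),
    }

feats = stylistic_features('Wooow "great" service! #irony http://t.co/x')
```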
      <sec id="sec-3-4">
        <title>Structural and Content Features</title>
        <p>
          Antonyms: This feature counts the number of antonym pairs present in the tweet. The WordNet
          <xref ref-type="bibr" rid="ref15">(Miller, 1995)</xref>
          antonym relation was used for this purpose.
        </p>
        <p>Lexical Ambiguity: Three different features were computed using WordNet: the first is the mean number of synsets per word; the second is the greatest number of synsets of any single word; the last is the difference between the number of synsets of the most ambiguous word and the average number of synsets.</p>
        <p>Domain Ambiguity: Three different features were computed using WordNet: the first is the mean number of domains per word; the second is the greatest number of domains of any single word in the tweet; the last is the difference between the number of domains of the word with the most domains and the average number of domains. Note that the resources WordNet Domains3 and SUMO4 were used separately.</p>
        <p>Persons: This feature tries to capture verbs conjugated in the first, second and third person, as well as nouns and adjectives which agree with such conjugations.</p>
        <p>Tenses: This feature tries to capture the different verbal tenses used in the tweet.</p>
        <p>Questions-answers: Occurrences of question-and-answer patterns in the tweet.</p>
        <p>Part of Speech: The number of nouns, verbs, adverbs and adjectives in the tweet.</p>
        <p>Negation: The amount of negation words.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Polarity Contrast Features</title>
        <p>
          With the purpose of capturing some types of explicit polarity contrast, we consider the set of features proposed in
          <xref ref-type="bibr" rid="ref18">(Peña et al., 2018)</xref>
          . The Italian polarity lexicon
          <xref ref-type="bibr" rid="ref3">(Basile and Nissim, 2013)</xref>
          was used to determine the contrast between different parts of the tweet.
        </p>
        <p>WordPolarityContrast: The polarity difference between the most positive and the most negative word in the tweet. This feature also considers the distance, in tokens, between these two words.</p>
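A minimal sketch of WordPolarityContrast with a tiny stand-in polarity lexicon (the actual system uses the Italian lexicon of Basile and Nissim, 2013):

```python
# WordPolarityContrast: difference between the most positive and the most
# negative word, plus their distance in tokens. The lexicon is a stub.
def word_polarity_contrast(tokens, lexicon):
    scored = [(i, lexicon.get(t.lower(), 0.0)) for i, t in enumerate(tokens)]
    i_pos, p_max = max(scored, key=lambda s: s[1])
    i_neg, p_min = min(scored, key=lambda s: s[1])
    return p_max - p_min, abs(i_pos - i_neg)

lexicon = {"love": 0.9, "terrible": -0.8}   # hypothetical polarity scores
contrast, dist = word_polarity_contrast(
    "i love this terrible weather".split(), lexicon)
```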
        <p>EmotiTextPolarityContrast: The polarity contrast between the emoticons and the words in the tweet.</p>
        <p>AntecedentConsequentPolarityContrast: The polarity contrast between two parts of the tweet when it is split by a delimiter. In this case, adverbs and punctuation marks were used as delimiters.</p>
        <p>MeanPolarityPhrase: The mean of the polarities of the words that belong to quotes.</p>
        <p>PolarityStandardDeviation: The standard deviation of the polarities of the words that belong to quotes.
3http://wndomains.fbk.eu/hierarchy.html
4http://www.adampease.org/OP/</p>
        <p>PresentPastPolarityContrast: It computes
the polarity contrast between the parts of the
tweet written in present and past tense.</p>
        <p>SkipGramPolarityRate: It computes the ratio between skip-grams with polarity contrast and all valid skip-grams. The valid skip-grams are those composed of two content words (nouns, adjectives, verbs, adverbs) with skip=1. The skip-grams with polarity opposition are those matching the patterns positive-negative, positive-neutral and negative-neutral, and vice versa.</p>
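A sketch of this rate over the sequence of polarity labels of a tweet's content words: with skip=1, a skip-gram pairs words two positions apart, and a pair counts as contrasting whenever its two labels differ.

```python
# SkipGramPolarityRate over content-word polarity labels ('pos'/'neg'/'neu'):
# fraction of skip=1 pairs whose labels differ.
def skipgram_polarity_rate(labels):
    pairs = [(labels[i], labels[i + 2]) for i in range(len(labels) - 2)]
    if not pairs:
        return 0.0
    return sum(1 for a, b in pairs if a != b) / len(pairs)

rate = skipgram_polarity_rate(["pos", "neu", "neg", "pos"])
```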
        <p>CapitalLetterTextPolarityContrast: It computes the polarity contrast between capitalized words and the rest of the words in the tweet.
</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3 Experiments and Results</title>
      <p>In this section we show and discuss the results of the proposed model in the Irony Detection shared task. In a first experiment we analyze the performance of four variants of our model using a 10-fold cross-validation strategy on the training set; each variant was run in both the constrained and the unconstrained setting. In Table 1, we summarize the results obtained in terms of macro-averaged F1 (F1-AVG). Specifically, we rely on the macro average to prevent systems biased towards the most populated classes.</p>
      <p>
        For run1-c and run1-u (CNN-LSTM) we only combine the representation obtained by the attention-based LSTM model with the CNN model; in these runs, no linguistic knowledge was considered. Run2-c and run2-u (CNN-LSTM-SVM) are a modification of the CNN-LSTM model: in this case we replace the softmax layer at the output of the model with a Linear Support Vector Machine (SVM) with default parameters as the final classifier. Run3-c and run3-u (CNN-LSTM-LING) represent the originally introduced model without any variations. Finally, for run4-c and run4-u (CNN-LSTM-LING-SVM) we replace the softmax layer with a linear SVM as the final classifier. For the unconstrained runs, we add the ironic tweets provided by the corpus Twittirò
        <xref ref-type="bibr" rid="ref6">(Cignarella et al., 2018b)</xref>
        to the official training set released by the IronITA organizers.
      </p>
      <p>Analyzing Table 1, several observations can be made. Firstly, the unconstrained runs achieved better results than the constrained ones; this reveals that introducing more ironic examples improves the performance of UO IRO. Secondly, the variants that consider linguistic knowledge (run3-c, run4-c, run3-u and run4-u) obtain an increase in effectiveness. With respect to the strategy used for the final classification of the tweets, the variants that use an SVM generally show a slight drop in AVG-F1.</p>
      <p>Regarding the official results, we submitted four runs: two in the constrained setting (RUN1-c and RUN2-c) and two in the unconstrained setting (RUN3-u and RUN4-u). For the unconstrained variants of UO IRO, the tweets provided by the corpus Twittirò were again added to the training set. Taking into account the results in Table 1, we selected CNN-LSTM-LING (RUN1-c and RUN3-u) and CNN-LSTM-LING-SVM (RUN2-c and RUN4-u) as the most promising variants of the model to evaluate on the official test set.</p>
      <p>As can be observed in Table 2, our four runs were ranked 12th, 13th, 14th and 15th out of a total of 17 submissions. The unconstrained variants of UO IRO achieved better results than the constrained ones. Contrary to the results shown in Table 1, the runs that use an SVM as the final classification strategy (RUN2-c and RUN4-u) were ranked better than the others. We think this behavior may be explained by the softmax classifier (the last layer of UO IRO) being more sensitive to over-fitting than a Support Vector Machine. Notice that, in all cases, our model surpasses the two baseline methods established by the organizers.
</p>
    </sec>
    <sec id="sec-5">
      <title>4 Conclusions</title>
      <p>In this paper we presented the UO IRO system for the task of Irony Detection in Italian Tweets (IronITA) at EVALITA 2018. We participated in the irony classification subtask, and our best submission ranked 12th out of 17. Our proposal combines an attention-based Long Short-Term Memory network, a Convolutional Neural Network, and linguistic information, which is incorporated through the second-to-last hidden layer of the model. The results show that considering linguistic features in combination with the deep representations learned by the neural network models leads to better effectiveness in terms of F1-measure. The results achieved by our system are interesting; however, a more fine-tuned hyper-parameter setting is required to improve the model's effectiveness. We think that including the linguistic features of irony in the first layers of the model could be a way to increase its effectiveness, and we would like to explore this approach in future work. We also plan to analyze how affective information flows through tweets, and how it impacts the realization of irony.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>Salvatore</given-names>
            <surname>Attardo</surname>
          </string-name>
          .
          <year>2000</year>
          .
          <article-title>Irony as relevant inappropriateness</article-title>
          .
          <source>Journal of Pragmatics</source>
          ,
          <volume>32</volume>
          (
          <issue>6</issue>
          ):
          <fpage>793</fpage>
          -
          <lpage>826</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>Francesco</given-names>
            <surname>Barbieri</surname>
          </string-name>
          , Horacio Saggion, and
          <string-name>
            <given-names>Francesco</given-names>
            <surname>Ronzano</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Modelling Sarcasm in Twitter, a Novel Approach</article-title>
          .
          <source>In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</source>
          , pages
          <fpage>136</fpage>
          -
          <lpage>141</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>Valerio</given-names>
            <surname>Basile</surname>
          </string-name>
          and
          <string-name>
            <given-names>Malvina</given-names>
            <surname>Nissim</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Sentiment analysis on Italian tweets</article-title>
          .
          <source>In 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis</source>
          , pages
          <fpage>100</fpage>
          -
          <lpage>107</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>Piotr</given-names>
            <surname>Bojanowski</surname>
          </string-name>
          , Edouard Grave, Armand Joulin, and
          <string-name>
            <given-names>Tomas</given-names>
            <surname>Mikolov</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Enriching Word Vectors with Subword Information. Transactions of the ACL</article-title>
          .,
          <volume>5</volume>
          :
          <fpage>135</fpage>
          -
          <lpage>146</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>Alessandra</given-names>
            <surname>Cignarella</surname>
          </string-name>
          , Frenda Simona, Basile Valerio, Bosco Cristina, Patti Viviana, and
          <string-name>
            <given-names>Rosso</given-names>
            <surname>Paolo</surname>
          </string-name>
          . 2018a.
          <article-title>Overview of the EVALITA 2018 Task on Irony Detection in Italian Tweets (IronITA)</article-title>
          . In Tommaso Caselli, Nicole Novielli, Viviana Patti, and Paolo Rosso, editors,
          <source>Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA</source>
          <year>2018</year>
          ), Turin, Italy. CEUR.org.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <given-names>Alessandra</given-names>
            <surname>Teresa</surname>
          </string-name>
          <string-name>
            <surname>Cignarella</surname>
          </string-name>
          , Cristina Bosco, Viviana Patti, and
          <string-name>
            <given-names>Mirko</given-names>
            <surname>Lai</surname>
          </string-name>
          . 2018b.
          <article-title>Application and Analysis of a Multi-layered Scheme for Irony on the Italian Twitter Corpus TWITTIRO´</article-title>
          .
          <source>In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC</source>
          <year>2018</year>
          ), pages
          <fpage>4204</fpage>
          -
          <lpage>4211</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Cimino</surname>
          </string-name>
          and Felice Dell'Orletta
          .
          <year>2016</year>
          .
          <article-title>Tandem LSTM-SVM Approach for Sentiment Analysis</article-title>
          . In CLiC-it/EVALITA.
          <year>2016</year>
          , pages
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . CEUR-WS.org.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>Jan</given-names>
            <surname>Deriu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Mark</given-names>
            <surname>Cieliebak</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Sentiment Analysis using Convolutional Neural Networks with Multi-Task Training and Distant Supervision on Italian Tweets</article-title>
          . In CLiC-it/EVALITA.
          <year>2016</year>
          , pages
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . CEUR-WS.org.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <given-names>Delia</given-names>
            <surname>Irazu</surname>
          </string-name>
          Hernández Farías, Viviana Patti
          , and
          <string-name>
            <given-names>Paolo</given-names>
            <surname>Rosso</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Irony Detection in Twitter</article-title>
          .
          <source>ACM Transactions on Internet Technology</source>
          ,
          <volume>16</volume>
          (
          <issue>3</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Delia-Irazu</surname>
          </string-name>
          Hernández Farías, Viviana Patti
          , and
          <string-name>
            <given-names>Paolo</given-names>
            <surname>Rosso</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>ValenTO at SemEval-2018 Task 3 : Exploring the Role of Affective Content for Detecting Irony in English Tweets</article-title>
          .
          <source>In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)</source>
          , pages
          <fpage>643</fpage>
          -
          <lpage>648</lpage>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          José Angel González, Lluís-F. Hurtado, and
          <string-name>
            <given-names>Ferran</given-names>
            <surname>Pla</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>ELiRF-UPV at SemEval-2018 Tasks 1 and 3 : Affect and Irony Detection in Tweets</article-title>
          .
          <source>In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)</source>
          , pages
          <fpage>565</fpage>
          -
          <lpage>569</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <given-names>Raj</given-names>
            <surname>Kumar</surname>
          </string-name>
          Gupta and
          <string-name>
            <given-names>Yinping</given-names>
            <surname>Yang</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>CrystalNest at SemEval-2017 Task 4 : Using Sarcasm Detection for Enhancing Sentiment Classification and Quantification</article-title>
          .
          <source>In Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017)</source>
          , pages
          <fpage>626</fpage>
          -
          <lpage>633</lpage>
          , Vancouver, Canada. Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <given-names>Diederik P.</given-names>
            <surname>Kingma</surname>
          </string-name>
          and Jimmy Ba
          .
          <year>2014</year>
          .
          <article-title>Adam: A method for stochastic optimization</article-title>
          .
          <source>arXiv preprint arXiv:1412</source>
          .
          <fpage>6980</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <given-names>Diana</given-names>
            <surname>Maynard and Mark A Greenwood</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>Who cares about sarcastic tweets ? Investigating the impact of sarcasm on sentiment analysis</article-title>
          .
          <source>In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)</source>
          .
          <source>European Language Resources Association.</source>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <given-names>George A</given-names>
            <surname>Miller</surname>
          </string-name>
          .
          <year>1995</year>
          .
          <article-title>WordNet: a lexical database for English</article-title>
          .
          <source>Communications of the ACM</source>
          ,
          <volume>38</volume>
          (
          <issue>11</issue>
          ):
          <fpage>39</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <given-names>Reynier</given-names>
            <surname>Ortega-Bueno</surname>
          </string-name>
          , Carlos E Mu, and
          <string-name>
            <given-names>Paolo</given-names>
            <surname>Rosso</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>UO UPV : Deep Linguistic Humor Detection in Spanish Social Media</article-title>
          .
          <source>In Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval</source>
          <year>2018</year>
          ), pages
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          Lluís Padró
          and
          <string-name>
            <given-names>Evgeny</given-names>
            <surname>Stanilovsky</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>FreeLing 3.0: Towards Wider Multilinguality</article-title>
          .
          <source>In Proceedings of the (LREC</source>
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <given-names>Anakarla</given-names>
            <surname>Sotolongo</surname>
          </string-name>
          Peña, Leticia Arco García, and Adrián Rodríguez Dosina
          .
          <year>2018</year>
          .
          <article-title>Deteccio´n de iron´ıa en textos cortos enfocada a la miner´ıa de opinio´n. In IV Conferencia Internacional en Ciencias Computacionales e Informa´ticas (CICCI'</article-title>
          <year>2018</year>
          ), number
          <fpage>1</fpage>
          -10, Havana, Cuba.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <given-names>Bo</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jin</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Xuejie</given-names>
            <surname>Zhang</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>YNU-HPCC at SemEval-2018 Task 3: Ensemble Neural Network Models for Irony Detection on Twitter</article-title>
          .
          <source>In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)</source>
          , pages
          <fpage>622</fpage>
          -
          <lpage>627</lpage>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <given-names>Harsh</given-names>
            <surname>Rangwani</surname>
          </string-name>
          , Devang Kulshreshtha, and Anil Kumar Singh.
          <year>2018</year>
          .
          <article-title>NLPRL-IITBHU at SemEval-2018 Task 3: Combining Linguistic Features and Emoji Pre-trained CNN for Irony Detection in Tweets</article-title>
          .
          <source>In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)</source>
          , pages
          <fpage>638</fpage>
          -
          <lpage>642</lpage>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <given-names>Antonio</given-names>
            <surname>Reyes</surname>
          </string-name>
          and
          <string-name>
            <given-names>Paolo</given-names>
            <surname>Rosso</surname>
          </string-name>
          .
          <year>2014</year>
          .
          <article-title>On the difficulty of automatically detecting irony: beyond a simple case of negation</article-title>
          .
          <source>Knowledge and Information Systems</source>
          ,
          <volume>40</volume>
          (
          <issue>3</issue>
          ):
          <fpage>595</fpage>
          -
          <lpage>614</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <given-names>Antonio</given-names>
            <surname>Reyes</surname>
          </string-name>
          , Paolo Rosso, and
          <string-name>
            <given-names>Davide</given-names>
            <surname>Buscaldi</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>From humor recognition to irony detection: The figurative language of social media</article-title>
          .
          <source>Data and Knowledge Engineering</source>
          ,
          <volume>74</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <given-names>Antonio</given-names>
            <surname>Reyes</surname>
          </string-name>
          , Paolo Rosso, and
          <string-name>
            <given-names>Tony</given-names>
            <surname>Veale</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>A multidimensional approach for detecting irony in Twitter</article-title>
          .
          <source>Language Resources and Evaluation</source>
          ,
          <volume>47</volume>
          (
          <issue>1</issue>
          ):
          <fpage>239</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <given-names>Deirdre</given-names>
            <surname>Wilson</surname>
          </string-name>
          and
          <string-name>
            <given-names>Dan</given-names>
            <surname>Sperber</surname>
          </string-name>
          .
          <year>1992</year>
          .
          <article-title>On verbal irony</article-title>
          .
          <source>Lingua</source>
          ,
          <volume>87</volume>
          (
          <issue>1</issue>
          ):
          <fpage>53</fpage>
          -
          <lpage>76</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <given-names>Chuhan</given-names>
            <surname>Wu</surname>
          </string-name>
          , Fangzhao Wu, Sixing Wu, Junxin Liu, Zhigang Yuan, and
          <string-name>
            <given-names>Yongfeng</given-names>
            <surname>Huang</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>THU_NGN at SemEval-2018 Task 3: Tweet Irony Detection with Densely Connected LSTM and Multi-task Learning</article-title>
          .
          <source>In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)</source>
          , pages
          <fpage>51</fpage>
          -
          <lpage>56</lpage>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>