<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>SSN_NLP@IDAT-FIRE-2019: Irony Detection in Arabic Tweets using Deep Learning and Features-based Approaches</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>S. Kayalvizhi</string-name>
          <email>kayalvizhis@ssn.edu.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>D. Thenmozhi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>B. Senthil Kumar</string-name>
          <email>senthil@ssn.edu.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chandrabose Aravindan</string-name>
          <email>aravindanc@ssn.edu.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of CSE, SSN College of Engineering</institution>
          ,
          <addr-line>Chennai</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <abstract>
<p>The disparity between a statement and its intended meaning is referred to as irony. Detecting this disparity in Arabic tweets is a challenging task. We present three approaches, namely deep learning using transformers, deep learning using Recurrent Neural Networks (RNN), and a features-based approach, for detecting irony in Arabic tweets. Among these, the deep learning approach using transformers achieves a better F1-score of 0.816 than the deep learning approach using RNN, which scores 0.719, and the features-based approach, which scores 0.709 on the IDAT@FIRE-2019 dataset.</p>
      </abstract>
      <kwd-group>
        <kwd>Deep neural network</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>Irony Detection</kwd>
        <kwd>LSTM</kwd>
        <kwd>Transformers</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Irony is a complex linguistic phenomenon widely studied in philosophy and
linguistics. Irony can be defined as an incompatibility between the literal meaning
and its conveyed meaning. Irony detection has gained relevance recently, due
to its importance in various NLP applications such as sentiment analysis, hate
speech detection, author profiling, fake news detection, and crisis management.
Irony has been detected in various languages, namely Italian [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], Czech [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], Spanish
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], French [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and other languages [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Various methods have been used to
detect irony in English tweets. These include an SVM classifier [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], an LSTM and
word-embeddings architecture [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], an ensemble of word-based and
character-based LSTMs [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and an ensemble of Logistic Regression (LR) and SVM [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ],
etc. In Arabic, sarcasm, a special form of irony, has been detected by creating a
word cloud of tweets and classifying the words with the Weka tool [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and deep neural
networks [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] have also been used to classify sarcasm. A survey has also been
done on the state of the art of irony detection in the Arabic language [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
IDAT@FIRE2019 [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] aims at detecting irony in Arabic tweets. Given a tweet, systems have
to classify it as either ironic or not ironic.
The dataset is from IDAT@FIRE-2019. It consists of short documents
taken from Twitter concerning political issues and events related to
the Middle East that occurred between 2011 and 2018. The training set
contains 4024 instances with two classes, namely ironic and non-ironic. The test
set contains 1006 instances.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Proposed Methodology</title>
      <p>We propose two deep learning approaches and a features-based approach for
detecting irony in Arabic tweets.</p>
      <p>
        Deep learning approach using transformers architecture
In this method, TensorFlow-based bi-directional transformers [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] are used.
Bi-directional Encoder Representations from Transformers (BERT) makes use
of transformers. A transformer is an architecture that converts an input sequence
to an output sequence based on self-attention, without using recurrent or
convolutional layers. The attention mechanism decides which parts of the
sentence are important. BERT uses a masking mechanism in which 15% of the input
tokens are masked; the model then predicts those masked tokens from the rest of the sequence.
      </p>
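<p>As a toy sketch of the masking step described above (illustrative only; the actual BERT procedure operates on WordPiece units and also leaves some selected tokens unchanged or replaces them with random tokens):</p>

```python
import random

MASK_RATE = 0.15  # BERT masks 15% of input tokens

def mask_tokens(tokens, mask_token="[MASK]", seed=0):
    """Replace roughly 15% of tokens with a mask symbol.

    Returns the masked sequence and a map of position -> original token,
    which is what the model is trained to predict.
    """
    rng = random.Random(seed)
    n_mask = max(1, round(len(tokens) * MASK_RATE))
    positions = sorted(rng.sample(range(len(tokens)), n_mask))
    masked = list(tokens)
    targets = {}
    for pos in positions:
        targets[pos] = masked[pos]   # the model must recover these
        masked[pos] = mask_token
    return masked, targets

masked, targets = mask_tokens("the match was so exciting that i fell asleep".split())
```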
      <p>In general, BERT also predicts whether two given sentences are adjacent
(i.e., whether the second follows the first). The architecture includes
parameters such as the number of layers, hidden nodes and attention heads. For binary
classification, BERT is fine-tuned by adding a classification
layer W of dimension (K*H), where ‘K’ is the number of classifier labels and ‘H’ is the
size of the final hidden state at the top of the model. The input data for the BERT
model is prepared by removing extra lines and special characters. The
preprocessed data is given to the model for training. The output of the classifier on
the test data is submitted as SSN_NLP Run 1.</p>
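<p>The added classification layer is simply a K×H projection of the final hidden state followed by a softmax. A minimal numpy sketch with illustrative values (H = 768 as in BERT-base, K = 2 for ironic / non-ironic; the hidden state here is a random stand-in):</p>

```python
import numpy as np

K, H = 2, 768          # 2 labels (ironic / non-ironic); BERT-base hidden size 768
rng = np.random.default_rng(0)

W = rng.normal(scale=0.02, size=(K, H))   # classification layer weights, shape (K, H)
b = np.zeros(K)
h = rng.normal(size=H)                    # final hidden state of the [CLS] token

logits = W @ h + b                              # one score per label, shape (K,)
probs = np.exp(logits) / np.exp(logits).sum()   # softmax over the two labels
label = int(np.argmax(probs))                   # predicted class: 0 or 1
```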
      <p>
        Deep learning approach using Recurrent Neural Network
architecture
In this method, a sequence-to-sequence model [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ] is used. In general, a
seq-to-seq model learns long- and short-term dependencies. In our work, the model is
used to predict the classes by learning these dependencies. For the implementation, we
have utilized the TensorFlow-based Neural Machine Translation tutorial code1.
      </p>
      <p>
        Initially, the given training set is split into training and testing sets and then
analysed. In this split, the training set had 3500 instances and the testing set had
524 instances. The tweets along with their classes are given to the model for
training.
1 https://github.com/tensorflow/nmt
The tweets are encoded into an intermediate representation, which is
then filtered by attention, which decides which parts are important, and
then decoded. Different models are constructed by varying the recurrent units
and attention mechanisms. In our work, two attention mechanisms, namely Scaled Luong [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and Normed
Bahdanau [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and three recurrent units, namely LSTM, GRU and GNMT, are
used. Then, the class labels are predicted and projected as output by a projection
layer, where the loss is also calculated and reduced by back-propagation. The results
of the analysis are shown in Table 1 below.
      </p>
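<p>The two attention mechanisms differ in how they score each encoder state against the decoder state: Luong attention is multiplicative, Bahdanau attention is additive. A minimal numpy sketch of the base score functions, with illustrative dimensions and random stand-in states (the "scaled" and "normed" variants add a learned scale factor and weight normalization, respectively, on top of these forms):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                                   # hidden size (illustrative)
query = rng.normal(size=d)              # decoder state h_t
keys = rng.normal(size=(5, d))          # five encoder states h_s

# Luong (multiplicative) score: h_t^T W h_s
W = rng.normal(size=(d, d))
luong_scores = keys @ (W @ query)                      # shape (5,)

# Bahdanau (additive) score: v^T tanh(W1 h_t + W2 h_s)
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)
bahdanau_scores = np.tanh(keys @ W2.T + W1 @ query) @ v  # shape (5,)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# attention weights decide how much each encoder state contributes
context = softmax(luong_scores) @ keys   # weighted sum of encoder states
```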
      <p>Table 1 reports the six configurations evaluated: each of the recurrent units
LSTM, GRU and GNMT combined with each of the two attention mechanisms,
Scaled Luong and Normed Bahdanau.
From Table 1, it is clear that the model with GRU as the recurrent unit and Scaled
Luong attention performs better than the other architectures, and it is submitted as
SSN_NLP Run 2.</p>
      <p>
        Features-based Approach
In this method, the given corpus is pre-processed and vectorized using
AraVec 1.0 [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. AraVec is an open-source set of pre-trained word embeddings for Arabic
content. It provides several models that vary in vocabulary size and
training technique. We have used the Twitter t-CBOW model with a vocabulary size
of 204,448 and 300 dimensions for our work.
      </p>
      <p>Classifiers</p>
      <p>After vectorization, every tweet is represented as a single vector by taking either
the element-wise minimum of its word vectors (300 dimensions), the element-wise
maximum (300 dimensions), or the concatenated minimum-maximum vector (600
dimensions). The vectors are then classified using different machine learning
classifiers, namely Multi-Layer Perceptron (MLP), Support Vector Classifier,
Decision Tree, KNN, Random Forest, AdaBoost and Quadratic Discriminant Analysis.</p>
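<p>A minimal numpy sketch of the three pooling schemes, using random stand-ins for the AraVec word vectors of one tweet:</p>

```python
import numpy as np

DIM = 300  # AraVec t-CBOW embedding size

rng = np.random.default_rng(0)
# word vectors of one tweet, one row per word (random stand-ins for AraVec lookups)
word_vectors = rng.normal(size=(7, DIM))

vec_min = word_vectors.min(axis=0)                 # 300-dim minimum representation
vec_max = word_vectors.max(axis=0)                 # 300-dim maximum representation
vec_minmax = np.concatenate([vec_min, vec_max])    # 600-dim min-max representation
```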
      <p>The traditional classifiers are trained on each of the three vector representations,
namely minimum, maximum and minimum-maximum; the results are shown in Table 2.
From the table, MLP with the minimum vector representation performs better than
the other classifier and representation combinations. Thus, it was submitted as
SSN_NLP Run 3.</p>
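<p>The exact classifier settings are not spelled out here; as an illustrative stand-in for the MLP (not the configuration actually used), a single-hidden-layer network trained by gradient descent on synthetic pooled vectors can be sketched as:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "pooled tweet vectors": two separable clusters standing in for the two classes
X = np.vstack([rng.normal(-1.0, 0.3, size=(40, 4)),
               rng.normal(+1.0, 0.3, size=(40, 4))])
y = np.array([0] * 40 + [1] * 40)

# one hidden layer, sigmoid output: a minimal MLP
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=8);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)                         # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))         # P(class = 1)
    return h, p

for _ in range(200):                                 # full-batch gradient descent
    h, p = forward(X)
    err = p - y                                      # d(log-loss)/d(logit)
    gW2 = h.T @ err / len(X); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)            # back-prop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= 0.5 * gW1; b1 -= 0.5 * gb1
    W2 -= 0.5 * gW2; b2 -= 0.5 * gb2

accuracy = ((forward(X)[1] > 0.5) == y).mean()
```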
    </sec>
    <sec id="sec-3">
      <title>Results</title>
      <p>The F1-scores of the three approaches on the test set are: 0.816 for the deep
learning approach using the transformers architecture, 0.793 for the deep learning
approach using the Recurrent Neural Network architecture, and 0.709 for the
features-based approach.</p>
      <p>
Word embeddings have been used in the deep learning approaches for detecting
irony in tweets. Significant phrases or words present in the tweets of the ironic
class may also occur in the non-ironic class of the training set, or vice versa, which
may lead to misclassification [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In the features-based approach, we have used
pre-trained word embeddings. Due to the out-of-vocabulary problem, many words
have been skipped, which may be a reason for misclassification.
      </p>
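<p>A minimal sketch of how out-of-vocabulary words end up skipped during vectorization (the embedding table and tokens are illustrative stand-ins, not AraVec entries):</p>

```python
import numpy as np

# a tiny stand-in embedding table; the real AraVec model covers 204,448 words
embeddings = {
    "good": np.array([0.1, 0.2]),
    "game": np.array([0.3, 0.1]),
}

def vectorize(tokens, table):
    """Look up each token; out-of-vocabulary tokens are simply skipped."""
    found = [table[t] for t in tokens if t in table]
    skipped = [t for t in tokens if t not in table]
    return found, skipped

found, skipped = vectorize(["good", "game", "wallahi"], embeddings)
# "wallahi" is not in the table, so it contributes nothing to the tweet vector
```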
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>Irony detection in Arabic tweets, in which the incompatibility between a statement
and its intended meaning must be found, has gained good relevance nowadays.
We have presented three different approaches, of which the deep learning
approach using transformers achieved an F1-score of 0.816 on the test data,
better than the deep learning approach using RNNs and the features-based
approach using an MLP classifier, with F1-scores of 0.793 and 0.709 respectively.
The performance can further be improved by tuning the parameter values of the
transformers architecture. The probability scores of different deep learning
architectures and traditional classifiers can also be ensembled to improve the
performance.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgement</title>
      <p>We would like to thank the Science and Engineering Research Board
(SERB), Department of Science and Technology for funding the GPU system
(EEQ/2018/000262) where this work is being carried out.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Al-Ghadhban</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alnkhilan</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tatwany</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alrazgan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Arabic sarcasm detection in twitter</article-title>
          .
          <source>In: 2017 International Conference on Engineering &amp; MIS (ICEMIS)</source>
          . pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          . IEEE (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bahdanau</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cho</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>Y.:</given-names>
          </string-name>
          <article-title>Neural machine translation by jointly learning to align and translate</article-title>
          .
          <source>arXiv preprint arXiv:1409.0473</source>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Barbieri</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ronzano</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saggion</surname>
          </string-name>
          , H.:
          <article-title>Italian irony detection in twitter: a first approach</article-title>
          .
          <source>In: The First Italian Conference on Computational Linguistics CLiC-it</source>
          . vol.
          <volume>28</volume>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Baziotis</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nikolaos</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Papalampidi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolovou</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paraskevopoulos</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ellinas</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Potamianos</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>NTUA-SLP at SemEval-2018 task 3: Tracking ironic tweets using ensembles of word and character level attentive RNNs</article-title>
          .
          <source>In: Proceedings of The 12th International Workshop on Semantic Evaluation</source>
          . pp.
          <fpage>613</fpage>
          -
          <lpage>621</lpage>
          . Association for Computational Linguistics, New Orleans, Louisiana (Jun
          <year>2018</year>
          ). https://doi.org/10.18653/v1/S18-1100, https://www.aclweb.org/anthology/S18-1100.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Devlin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chang</surname>
            ,
            <given-names>M.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toutanova</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Bert: Pre-training of deep bidirectional transformers for language understanding</article-title>
          .
          <source>arXiv preprint arXiv:1810.04805</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Ghanem</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Karoui</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Benamara</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moriceau</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          : IDAT@
          <article-title>FIRE2019: Overview of the Track on Irony Detection in Arabic Tweets</article-title>
          . In: Metha P.,
          <string-name>
            <surname>Rosso</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Majumder</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mitra</surname>
            <given-names>M</given-names>
          </string-name>
          . (Eds.)
          <article-title>Working Notes of the Forum for Information Retrieval Evaluation (FIRE 2019)</article-title>
          . CEUR Workshop Proceedings. In: CEUR-WS.org, Kolkata, India, December
          <volume>12</volume>
          -
          <fpage>15</fpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>González</given-names>
            <surname>Fuente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Prieto</surname>
          </string-name>
          <string-name>
            <surname>Vives</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Noveck</surname>
          </string-name>
          ,
          <string-name>
            <surname>I.A.</surname>
          </string-name>
          :
          <article-title>A fine-grained analysis of the acoustic cues involved in verbal irony recognition in french</article-title>
          . Barnes J,
          <string-name>
            <surname>Brugos</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shattuck-Hufnagel</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Veilleux</surname>
            <given-names>N</given-names>
          </string-name>
          , editors.
          <source>Speech Prosody</source>
          <year>2016</year>
          ; 2016 May 31- June 3; Boston, United States of America.[place unknown]: International Speech Communication Association;
          <year>2016</year>
          . p.
          <fpage>902</fpage>
          -
          <lpage>6</lpage>
          . DOI: 10.21437/SpeechProsody.2016-185 (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Karoui</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zitoune</surname>
            ,
            <given-names>F.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moriceau</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Soukhria: Towards an irony detection system for arabic in social media</article-title>
          .
          <source>Procedia Computer Science</source>
          <volume>117</volume>
          ,
          <fpage>161</fpage>
          -
          <lpage>168</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Luong</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brevdo</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>R.:</given-names>
          </string-name>
          <article-title>Neural machine translation (seq2seq) tutorial</article-title>
          . https://github.com/tensorflow/nmt (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Luong</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pham</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manning</surname>
          </string-name>
          , C.D.:
          <article-title>Effective approaches to attention-based neural machine translation</article-title>
          .
          <source>arXiv preprint arXiv:1508.04025</source>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Mohammad</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bravo-Marquez</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Emotion intensities in tweets</article-title>
          .
          <source>arXiv preprint arXiv:1708.03696</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>T.Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chiang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Improving lexical choice in neural machine translation</article-title>
          .
          <source>arXiv preprint arXiv:1710.01329</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Ortega-Bueno</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rangel</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hernández Farıas</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montes-y Gómez</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Medina</given-names>
            <surname>Pagola</surname>
          </string-name>
          ,
          <string-name>
            <surname>J.E.</surname>
          </string-name>
          :
          <article-title>Overview of the task on irony detection in spanish variants</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ),
          <article-title>co-located with 34th Conference of the Spanish Society for Natural Language Processing (SEPLN</article-title>
          <year>2019</year>
          ).
          <article-title>CEUR-WS. org (</article-title>
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Ptáček</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Habernal</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hong</surname>
          </string-name>
          , J.:
          <article-title>Sarcasm detection on czech and english twitter</article-title>
          .
          <source>In: Proceedings of COLING</source>
          <year>2014</year>
          ,
          <source>the 25th International Conference on Computational Linguistics: Technical Papers</source>
          . pp.
          <fpage>213</fpage>
          -
          <lpage>223</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Rohanian</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taslimipoor</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Evans</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mitkov</surname>
          </string-name>
          , R.: Wlv at semeval
          <article-title>-2018 task 3: Dissecting tweets in search of irony</article-title>
          .
          <source>In: Proceedings of The 12th International Workshop on Semantic Evaluation</source>
          . pp.
          <fpage>553</fpage>
          -
          <lpage>559</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rangel</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Farías</surname>
            ,
            <given-names>I.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cagnina</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zaghouani</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Charfi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>A survey on author profiling, deception, and irony detection for the arabic language</article-title>
          .
          <source>Language and Linguistics Compass</source>
          <volume>12</volume>
          (
          <issue>4</issue>
          ),
          <year>e12275</year>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Soliman</surname>
            ,
            <given-names>A.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eissa</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>El-Beltagy</surname>
            ,
            <given-names>S.R.</given-names>
          </string-name>
          :
          <article-title>Aravec: A set of arabic word embedding models for use in arabic nlp</article-title>
          .
          <source>Procedia Computer Science</source>
          <volume>117</volume>
          ,
          <fpage>256</fpage>
          -
          <lpage>265</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yuan</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Thu_ngn at semeval-2018 task 3: Tweet irony detection with densely connected lstm and multi-task learning</article-title>
          .
          <source>In: Proceedings of The 12th International Workshop on Semantic Evaluation</source>
          . pp.
          <fpage>51</fpage>
          -
          <lpage>56</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Xiong</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , Zhang,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            ,
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <surname>Y.</surname>
          </string-name>
          :
          <article-title>Sarcasm detection with selfmatching networks and low-rank bilinear pooling</article-title>
          .
          <source>In: The World Wide Web Conference</source>
          . pp.
          <fpage>2115</fpage>
          -
          <lpage>2124</lpage>
          . WWW '19,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, USA (
          <year>2019</year>
          ). https://doi.org/10.1145/3308558.3313735
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>