<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>UO-CERPAMID at IroSvA: Impostor Method Adaptation for Irony Detection</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Daniel Castro</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Center for Pattern Recognition and Data Mining, Universidad de Oriente</institution>
          ,
          <country country="CU">Cuba</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>315</fpage>
      <lpage>321</lpage>
      <abstract>
        <p>Irony in text allows expressing implicit negative opinions using figurative and humorous language. IroSvA is a challenging task proposed this year that allows the evaluation of irony detection algorithms on short texts in three Spanish language variants (Cuban, Spanish and Mexican). Our proposal focuses on the study of three different representations of textual information and on similarity measures using a weighted combination of these representations. We use an adaptation of the impostors method to classify texts as ironic or non-ironic, taking the non-ironic texts of the training dataset as the list of impostors. The results achieved are encouraging, and the best were obtained for the Cuban variant.</p>
      </abstract>
      <kwd-group>
        <kwd>Irony detection</kwd>
        <kwd>Impostor method</kwd>
        <kwd>Text similarity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Irony is a fundamental rhetorical device. It is a uniquely human mode of
communication, curious in that the speaker says something other than what he or
she intends [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Irony is an active part of users' speech on the web when they
express their opinions (blogs, forums, social networks, specialized sites).
Hence the importance of its computational detection, for the analysis of
data by companies with access to the data these users generate, for entities
interested in sentiment analysis, and others. Irony detection has been defined
as "a set of characteristics and techniques that allow you to decide whether a
text is ironic or not" [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Another definition is "Irony detection is an interesting
machine learning problem because, in contrast to most text classification tasks,
it requires a semantics that cannot be inferred directly from word counts over
documents alone" [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>
        As such, modeling irony has a large potential for applications in various research
areas, including text mining, author profiling, detecting online harassment and,
perhaps one of the most investigated applications at present, automatic
sentiment analysis [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Detecting irony thus involves some of the most challenging areas of natural
language processing, and research on user-generated content is itself a very
challenging task. Although social media texts convey an invaluable source of
information, they are difficult to process because they are noisy, informal,
short on context, and full of grammatical mistakes [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Although the task of detecting irony is still in its beginnings, the English language
[
        <xref ref-type="bibr" rid="ref1 ref14 ref3">14,1,3</xref>
        ] was the most studied. There are already works in other languages such
as Dutch [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], Catalan [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], Arabic [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and still few consider Spanish [
        <xref ref-type="bibr" rid="ref12 ref6">12,6</xref>
        ].
The SemEval 2018 shared task [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] on "irony detection in tweets" and IronITA 2018 [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
were the first shared tasks in irony detection. In SemEval 2018, "The systems that were
submitted represent a variety of neural-network-based approaches (i.e. CNNs,
RNNs and (bi-)LSTMs) exploiting word and character embeddings as well as
handcrafted features. Other popular classification algorithms include Support
Vector Machines, Maximum Entropy, Random Forest, and Naive Bayes. While
most approaches were based on one algorithm, some participants experimented
with ensemble learners (e.g. SVM + LR, CNN + bi-LSTM, stacked LSTMs),
implemented a voting system or built a cascaded architecture (for Task B) that
first distinguished ironic from non-ironic tweets and subsequently differentiated
between the fine-grained irony categories" [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        IroSvA (Irony Detection in Spanish Variants) is the first shared task fully
dedicated to identifying the presence of irony in short messages (tweets and news
comments) written in Spanish [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>The task is structured into three subtasks, one per Spanish variant, each
asking systems to predict whether messages are ironic or not. The three
subtasks share the same goal: participants should determine whether a
message is ironic or not according to a specified context.</p>
      <p>
        Recent advances in irony detection have shown that supervised classification
with extensive feature engineering produces satisfactory
indicators of irony or sarcasm. This methodology has been tested on short texts
such as product reviews, news commentaries and tweets [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Supervised
classification determines the class of an object based on a set of known
objects grouped by class.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Proposal presented</title>
      <p>
        Our approach is based on representing the text with three different
feature vectors. To classify a new text into the ironic or non-ironic class,
we used the General Impostors Method (GIM) proposed in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], but with a
simplified variation of the Impostors Method (IM) presented in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. The similarity
between texts is defined as a weighted similarity over the three vector
representations. The IM uses a set of non-ironic documents related to an
ironic sample and analyzes the similarity of an unknown document with both the
set and the ironic sample: the unknown document is deemed ironic if it is more
similar to the ironic one than to a random subset of the non-ironic documents.
In the next sections we explain the representations, the weighted similarity
and the GIM.
      </p>
      <sec id="sec-2-1">
        <title>Document representation</title>
        <p>A text (document) is modeled as an object with three vector representations
that share some features. The idea behind this type of representation
is to adjust which representation matters for different genres of text.
The first representation, denoted R1, is based on the classical Bag of Words
(BoW) vector; we build it with the tokens extracted by a natural
language tokenizer or with lemmas extracted by a natural language lemmatizer.
The second representation, R2, is based only on the frequency of punctuation
signs, considering all punctuation extracted by a part-of-speech tagger. For the
third representation, R3, different stylistic features and their frequencies
are computed. Some of the stylistic features are: entirely capitalized words (QUE
BUENO), character flooding (oooooohhhhhh), repetition of closing exclamation
marks (!!!!!!!), etc.</p>
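        <p>To make the representations concrete, the R2 and R3 extraction can be sketched as follows. This is only an illustrative sketch: the punctuation inventory, the regular expressions and the feature names are our assumptions, not the ones used in the system.</p>
        <preformat>
```python
import re
from collections import Counter

def punctuation_features(text):
    # R2: frequency of each punctuation sign appearing in the text
    # (illustrative inventory; the paper extracts punctuation with a POS tagger)
    return Counter(ch for ch in text if ch in "!?¡¿.,;:()\"'")

def stylistic_features(text):
    # R3: frequencies of a few of the stylistic markers named above
    return {
        # entirely capitalized words, e.g. "QUE BUENO"
        "all_caps": len(re.findall(r"\b[A-ZÁÉÍÓÚÑ]{2,}\b", text)),
        # character flooding, e.g. "oooooohhhhhh" (a character repeated 3+ times)
        "flooding": len(re.findall(r"(\w)\1{2,}", text)),
        # runs of repeated closing exclamation marks, e.g. "!!!!!!!"
        "exclam_runs": len(re.findall(r"!{2,}", text)),
    }
```
        </preformat>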
      </sec>
      <sec id="sec-2-2">
        <title>Similarity measures</title>
        <p>The similarity measure to compare two documents needs to consider all the
representations proposed, but also to be flexible, so that we can use only some
of them. For that reason we use the following measure.</p>
        <p>sim(D1, D2) = α · sim(D1R1, D2R1) + β · sim(D1R2, D2R2) + γ · sim(D1R3, D2R3)   (1)</p>
        <p>For the similarity function sim we implemented different similarity measures proposed
in the literature, for example cosine, Jaccard, Dice and Tanimoto, or distances like
Euclidean or MinMax. In the evaluation phase we tested all of them and
used, for each representation, the one that obtained the best result.
The parameters α, β and γ let us weight the importance of each representation; if one
of the parameters is 0, that representation is not considered. α + β + γ = 1.</p>
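        <p>The weighted combination of Equation 1 can be sketched as follows. This is a minimal sketch using cosine similarity, one of the implemented options; the sparse-vector encoding and the example weights are our assumptions.</p>
        <preformat>
```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse vectors given as dicts.
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def weighted_similarity(d1, d2, weights=(0.5, 0.3, 0.2)):
    # d1 and d2 are triples of sparse vectors (R1, R2, R3); weights are
    # (alpha, beta, gamma) with alpha + beta + gamma = 1. A weight of 0
    # drops that representation from the comparison.
    return sum(w * cosine(u, v) for w, (u, v) in zip(weights, zip(d1, d2)))
```
        </preformat>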
      </sec>
      <sec id="sec-2-3">
        <title>Impostor method</title>
        <p>We used the proposed General Impostors Method (GIM), but omitting from
the IM the step of randomly choosing a subset of features from the
representation of the text.</p>
        <p>The set of impostors S corresponds to the set of non-ironic documents provided
in the training dataset. D1 is the document to be classified and D2 is an ironic
document. θ is a threshold that needs to be adjusted on the training dataset
and allows us to decide that D1 is ironic if it is more similar to the ironic
D2 than to a percentage of the non-ironic impostors.</p>
        <sec id="sec-2-3-1">
          <title>Impostors Method (IM)</title>
          <p>Input: (D1, D2) a pair of documents; S: a set of impostor documents
Output: (ironic) or (non-ironic)
1. Score = 0
2. Repeat k times
   a. Randomly choose n impostors from S: I1, ..., In
   b. Score += 1/k if sim(D1, D2) · sim(D2, D1) &gt; sim(D1, Ii) · sim(D2, Ii)
      for each i in {1, ..., n}
3. Return (ironic) if Score &gt; θ; else (non-ironic)</p>
          <p>The set of non-ironic texts is considered the set of impostor texts, and for the
classification of the evaluation dataset we used all the non-ironic texts provided
in the training dataset. All the parameters were optimized, and the best values,
used for the evaluation phase, are summarized in the Evaluation section. The
pseudo-code of IM and GIM is presented in this section.</p>
        </sec>
        <sec id="sec-2-3-2">
          <title>General Impostors Method (GIM)</title>
          <p>Input: (D) a document to be classified; Y: (D1, ..., Dn) ironic documents
Output: (ironic) or (non-ironic)
1. For each pair of documents (D, Di)
   a. Run the original IM to obtain a binary similarity score S(D, Di)
2. Score = average over the similarity scores [S(D, D1), ..., S(D, Dn)]
3. Return (ironic) if Score &gt; θ; else (non-ironic)</p>
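          <p>The IM and GIM pseudo-code can be sketched together as follows. This is a minimal sketch: the product-of-similarities comparison and all parameter defaults (k, n, θ) are our assumptions for illustration.</p>
          <preformat>
```python
import random

def impostors_method(d1, d2, impostors, sim, k=10, n=5, theta=0.5):
    # IM: d1 is the unknown document, d2 an ironic document, and
    # impostors the non-ironic training texts; no random feature
    # subsampling, as in the simplified variation described above.
    score = 0.0
    for _ in range(k):
        sample = random.sample(impostors, min(n, len(impostors)))
        target = sim(d1, d2) * sim(d2, d1)
        # d1 must beat every sampled impostor to earn 1/k
        if all(target > sim(d1, imp) * sim(d2, imp) for imp in sample):
            score += 1.0 / k
    return "ironic" if score > theta else "non-ironic"

def general_impostors_method(d, ironic_docs, impostors, sim, theta=0.5):
    # GIM: run IM against every ironic training document and average
    # the resulting binary scores; classify by the threshold theta.
    votes = [impostors_method(d, di, impostors, sim) == "ironic"
             for di in ironic_docs]
    score = sum(votes) / len(votes)
    return "ironic" if score > theta else "non-ironic"
```
          </preformat>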
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Evaluation results</title>
      <p>The evaluation was executed for the three Spanish language variants
presented by the task, and for each of them we needed to optimize the parameters
of the GIM and of the weighted similarity measure. For that purpose we ran a
10-fold cross-validation over the training dataset provided. The range of
parameters evaluated was varied, and the best parameters chosen are shown in
Table 1 (parameters of UO-run2).</p>
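      <p>The search over the weights of Equation 1 can be sketched as enumerating candidates on a grid constrained to sum to one. This is only an illustrative enumeration; the actual step size and search ranges are our assumptions.</p>
      <preformat>
```python
from itertools import product

def candidate_weights(step=0.25):
    # Enumerate candidate (alpha, beta, gamma) triples on a grid,
    # keeping only those that satisfy alpha + beta + gamma = 1.
    vals = [i * step for i in range(int(round(1 / step)) + 1)]
    grid = []
    for a, b in product(vals, repeat=2):
        g = 1.0 - a - b
        if g >= -1e-9:  # keep only triples on the simplex
            grid.append((a, b, max(g, 0.0)))
    return grid
```
      </preformat>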
      <p>
        The structure and data distribution for the training and test datasets were
presented by [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], along with the metrics used for evaluation. The similarity
between documents was calculated with the Euclidean distance function for
each of the three representations, because it obtained the best results in the
cross-validation evaluation among all the comparison functions implemented.
      </p>
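      <p>Since a distance must be turned into a similarity before entering the weighted combination, one common conversion is 1/(1 + d). The sketch below assumes that conversion and a sparse-vector encoding; the paper only states that Euclidean distance was used.</p>
      <preformat>
```python
import math

def euclidean_similarity(u, v):
    # Euclidean distance between sparse vectors (dicts), converted to a
    # similarity in (0, 1] via 1 / (1 + distance). The conversion is an
    # assumption made for illustration.
    keys = set(u).union(v)
    dist = math.sqrt(sum((u.get(k, 0.0) - v.get(k, 0.0)) ** 2 for k in keys))
    return 1.0 / (1.0 + dist)
```
      </preformat>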
      <sec id="sec-3-1">
        <title>Discussion</title>
        <p>
          In Table 2, rows 1 and 2 present the results achieved by the two
runs submitted to the task, together with all the baselines provided by the
organizers and the highest value obtained by [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. The main difference between UO-run1 and UO-run2 is
that in run1 the BoW representation takes as features the tokens extracted by
the FreeLing [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] NLP tokenizer, while in run2 the BoW representation considers as
features the lemmas extracted by the FreeLing lemmatizer. Also, the parameters
α and β were 0.5 for UO-run1.
        </p>
        <p>Our best result was achieved with UO-run2 for the Cuban variant, and it is
similar to those of two of the baselines. It is important to notice that the results
with the lemma representation were always better than with the representation
based only on lexical tokens, because in the former we reduced the lexical
variety of words referring to the same lemma.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusions and Future Work</title>
      <p>In general, based on the cross-validation results on the training portions of the
Spain and Mexican datasets (Spanish tweets), the R3 representation
obtains the worst results. This is due to the stylistic variety among ironic
tweets, and also to the similarity in stylistic features between ironic and
non-ironic tweets.</p>
      <p>As future directions, we will introduce representations based on word n-grams
and feature selection methods based on the importance of features per class.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Barbieri</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saggion</surname>
          </string-name>
          , H.:
          <article-title>Modelling irony in twitter: Feature analysis and evaluation</article-title>
          .
          <source>In: Proceedings of the Ninth International Conference on Language Resources and Evaluation</source>
          ,
          LREC
          <year>2014</year>
          , Reykjavik, Iceland, May
          <volume>26</volume>
          -31,
          <year>2014</year>
          . pp.
          <fpage>4258</fpage>
          -
          <lpage>4264</lpage>
          (
          <year>2014</year>
          ), http://www.lrec-conf.org/proceedings/lrec2014/summaries/231.html
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Cignarella</surname>
            ,
            <given-names>A.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frenda</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Basile</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bosco</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patti</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Overview of the EVALITA 2018 task on irony detection in Italian tweets (IronITA)</article-title>
          .
          <source>In: Proceedings of the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA</source>
          <year>2018</year>
          )
          <article-title>co-located with the Fifth Italian Conference on Computational Linguistics (CLiC-it</article-title>
          <year>2018</year>
          ), Turin, Italy,
          <source>December 12-13</source>
          ,
          <year>2018</year>
          . (
          <year>2018</year>
          ), http://ceur-ws.org/Vol-2263/paper005.pdf
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Hernández Farías</surname>
            ,
            <given-names>D.I.</given-names>
          </string-name>
          :
          <article-title>Irony and Sarcasm Detection in Twitter: The Role of Affective Content</article-title>
          . PhD thesis, Universidad Politécnica de Valencia (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Hee</surname>
            ,
            <given-names>C.V.</given-names>
          </string-name>
          :
          <article-title>Exploring automatic irony detection on social media</article-title>
          . PhD thesis, Ghent University (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Hee</surname>
            ,
            <given-names>C.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lefever</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hoste</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>SemEval-2018 task 3: Irony detection in English tweets</article-title>
          .
          <source>In: Proceedings of The 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT</source>
          <year>2018</year>
          , New Orleans, Louisiana, USA, June 5-6,
          <year>2018</year>
          . pp.
          <fpage>39</fpage>
          -
          <lpage>50</lpage>
          (
          <year>2018</year>
          ), https://aclanthology.info/papers/S18-1005/s18-1005
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Jasso</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meza-Ruiz</surname>
            ,
            <given-names>I.V.</given-names>
          </string-name>
          :
          <article-title>Character and word baselines systems for irony detection in Spanish short texts</article-title>
          .
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>56</volume>
          ,
          <fpage>41</fpage>
          -
          <lpage>48</lpage>
          (
          <year>2016</year>
          ), http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/5285
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Karoui</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zitoune</surname>
            ,
            <given-names>F.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moriceau</surname>
          </string-name>
          , V.:
          <article-title>SOUKHRIA: towards an irony detection system for arabic in social media</article-title>
          .
          <source>In: Third International Conference On Arabic Computational Linguistics, ACLING 2017, November 5-6</source>
          ,
          <year>2017</year>
          , Dubai, United Arab Emirates. pp.
          <fpage>161</fpage>
          -
          <lpage>168</lpage>
          (
          <year>2017</year>
          ). https://doi.org/10.1016/j.procs.2017.10.105
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Liebrecht</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kunneman</surname>
          </string-name>
          , F., van den Bosch, A.:
          <article-title>The perfect solution for detecting sarcasm in tweets #not</article-title>
          .
          <source>In: Proceedings of the 4th Workshop on Computational Approaches</source>
          to Subjectivity,
          <article-title>Sentiment and Social Media Analysis</article-title>
          ,
          <source>WASSA@NAACL-HLT</source>
          <year>2013</year>
          , 14 June 2013, Atlanta, Georgia, USA. pp.
          <fpage>29</fpage>
          -
          <lpage>37</lpage>
          (
          <year>2013</year>
          ), http://aclweb.org/anthology/W/W13/W13-1605.pdf
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Muñoz</surname>
            ,
            <given-names>J.R.</given-names>
          </string-name>
          :
          <article-title>TwIrony: Identificación de la ironía en Tweets en Catalán</article-title>
          . Thesis, Universitat Pompeu Fabra (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Ortega-Bueno</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rangel</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hernández Farías</surname>
            ,
            <given-names>D.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montes-y-Gómez</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Medina Pagola</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          :
          <article-title>Overview of the Task on Irony Detection in Spanish Variants</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ),
          <article-title>co-located with 34th Conference of the Spanish Society for Natural Language Processing (SEPLN</article-title>
          <year>2019</year>
          ). CEUR-WS.org (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Padro</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stanilovsky</surname>
          </string-name>
          , E.:
          <article-title>Freeling 3.0: Towards wider multilinguality</article-title>
          .
          <source>In: Proceedings of the Eighth International Conference on Language Resources and Evaluation</source>
          ,
          LREC
          <year>2012</year>
          , Istanbul, Turkey, May
          <volume>23</volume>
          -25,
          <year>2012</year>
          . pp.
          <fpage>2473</fpage>
          -
          <lpage>2479</lpage>
          (
          <year>2012</year>
          ), http://www.lrec-conf.org/proceedings/lrec2012/summaries/430.html
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Pinto</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Modelo de Detección Automática de Ironía en Textos en Español</article-title>
          . Master's thesis (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Rangel</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Franco-Salvador</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>A low dimensionality representation for language variety identification</article-title>
          .
          <source>In: 17th International Conference on Intelligent Text Processing and Computational Linguistics</source>
          ,
          <source>CICLing'16</source>
          . Springer-Verlag,
          <source>LNCS(9624)</source>
          , pp.
          <fpage>156</fpage>
          -
          <lpage>169</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Reyes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Veale</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>A multidimensional approach for detecting irony in twitter</article-title>
          .
          <source>Language Resources and Evaluation</source>
          <volume>47</volume>
          (
          <issue>1</issue>
          ),
          <fpage>239</fpage>
          -
          <lpage>268</lpage>
          (
          <year>2013</year>
          ). https://doi.org/10.1007/s10579-012-9196-x
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Seidman</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Authorship verification using the impostors method: notebook for PAN at CLEF 2013</article-title>
          . In: Working Notes for CLEF 2013 Conference , Valencia, Spain,
          <source>September 23-26</source>
          ,
          <year>2013</year>
          . (
          <year>2013</year>
          ), http://ceur-ws.org/Vol-1179/CLEF2013wn-PANSeidman2013.pdf
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Wallace</surname>
            ,
            <given-names>B.C.</given-names>
          </string-name>
          :
          <article-title>Computational irony: A survey and new perspectives</article-title>
          .
          <source>Artif. Intell. Rev</source>
          .
          <volume>43</volume>
          (
          <issue>4</issue>
          ),
          <fpage>467</fpage>
          -
          <lpage>483</lpage>
          (
          <year>2015</year>
          ). https://doi.org/10.1007/s10462-012-9392-5
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>