<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>UMUTeam at TASS 2020: Combining Linguistic Features and Machine-learning Models for Sentiment Classification</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>José Antonio García-Díaz</string-name>
          <email>joseantonio.garcia8@um.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ángela Almela</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rafael Valencia-García</string-name>
          <email>valencia@um.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Facultad de Informática, Universidad de Murcia, Campus de Espinardo</institution>
          ,
          <addr-line>30100</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Facultad de Letras, Universidad de Murcia, Campus de La Merced</institution>
          ,
          <addr-line>30001, Murcia</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <fpage>187</fpage>
      <lpage>196</lpage>
      <abstract>
<p>This paper describes the participation of the UMUTeam in the TASS 2020 Workshop on Sentiment Analysis, in which two tracks were proposed. The first track consists in the classification of tweets written in several Spanish varieties according to their general sentiment, whereas the second task consists in a fine-grained distinction between the six basic emotions described by Ekman (2009). Our proposal is based on the usage of linguistic features on their own or in combination with word-embeddings. Specifically, we test Convolutional Neural Networks and Support Vector Machines with sentence embeddings. Although our proposal did not achieve the best results, we obtained the best precision rate in emotion detection (Task 2) and competitive results in the general sentiment classification in which tweets written in different varieties of Spanish were mixed. We consider that our proposal, despite its limitations, provides substantial benefits such as the interpretability of the results.</p>
      </abstract>
      <kwd-group>
        <kwd>Sentiment Analysis</kwd>
        <kwd>Supervised learning</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>Machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The TASS 2020 workshop proposed two tasks. Task 1 consisted in classifying tweets written in several Spanish varieties according to their general sentiment, whereas Task 2 consisted in classifying tweets according to the six basic emotions described by Ekman [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]: anger, disgust, fear, joy, sadness and surprise, along with an additional label in order to classify those
tweets which do not match the aforementioned emotions. All datasets, regardless of the task,
were unbalanced: Task 1 contained more tweets labelled as neutral and Task 2 contained more
tweets labelled as others.
      </p>
      <p>We participated in both Task 1 and Task 2. In a nutshell, our proposal consisted in testing the
reliability of linguistic features on their own or in combination with well-known machine-learning and
deep-learning models. Linguistic features, in comparison with statistical approaches, ease the
interpretability of the results because they are conceptually higher-level features than words.
With our participation, we attempted to answer the following research questions:
• RQ1. Is the usage of linguistic features enough to compete with state-of-the-art approaches
based on statistical methods?
• RQ2. Can linguistic features be combined with statistical methods in order to improve
the accuracy of the results while keeping interpretability?
• RQ3. Is the reliability of linguistic features affected by different cultural backgrounds within
the same language?</p>
      <p>To answer these research questions, we extracted linguistic features from the datasets by
using a self-developed tool designed from scratch for the Spanish language, which is part
of the first author’s doctoral thesis. These linguistic features were evaluated on their own or in
combination with word-embeddings fed to a Convolutional Neural Network (CNN), and with
sentence-embeddings fed to Support Vector Machines (SVMs).</p>
      <p>The rest of the paper is organised as follows. In Section 2, the datasets provided to the
participants are described, whereas Section 3 describes the materials and methods used in our
proposal. Then, Section 4 shows the results from each model and task along with an analysis of
the results. Finally, Section 5 summarises the results according to the research questions and
describes further improvement actions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Corpus description</title>
      <p>
        The datasets involved in these tasks were provided by the organisers of the TASS workshop
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The corpus was compiled from Twitter in April 2019 and contains topics from different
domains, such as politics, entertainment, catastrophes, and global events, among others. The
corpus contains approximately 16k tweets for Task 1, and 8.4k tweets for Task 2 [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. For each
task, the corpus was released as three datasets, namely, training, development, and testing.
Table 1 shows the distribution of the datasets for each task. Note that the size of the training
and development datasets of Subtask 1.2 is 0 because participants were expected to use the
training and development datasets of Subtask 1.1.
      </p>
      <p>It is worth noting that hashtags and mentions were replaced by the tokens HASHTAG and
@USER in order to prevent automatic classifiers from overfitting their results based on wrong
assumptions. Another significant fact is that the label Neutral also referred to those tweets
to which no sentiment was assigned during the manual classification of the corpus. This is a
novelty with respect to previous editions of the TASS Workshop, in which tweets had been
classified as neutral and none separately.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Materials and methods</title>
      <p>In this section, we describe (1) the preprocessing techniques applied to the corpus according to
our proposal (see Section 3.1), (2) the linguistic features extracted (see Section 3.2), and (3) the
machine-learning models used for training the sentiment classifiers (see Section 3.3).</p>
      <sec id="sec-3-1">
        <title>3.1. Preprocessing stage</title>
        <p>Our preprocessing stage involved (1) converting each letter to its lowercase form, (2) fixing
misspellings and typos by using the Aspell library, (3) contracting white-space characters, such
as spaces, tabs or new lines, and (4) removing expressive lengthening (the intentional elongation
of letters in a word to emphasise it). The normalised version of each tweet is used as input
for extracting those linguistic features that are dictionary-based. However, we also kept the
original version of each tweet in order to extract certain features regarding the usage of uppercase letters
(which may be indicative of shouting or emphasis) or to identify the number of misspellings,
among other features.</p>
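<p>As an illustration of steps (1), (3), and (4), a minimal Python sketch is given below; the spell-checking step with Aspell is omitted, and the elongation rule (trimming any character repeated three or more times down to two) is an assumption for illustration, not the authors’ actual implementation:</p>

```python
import re

def normalise(tweet: str) -> str:
    """Toy version of the preprocessing pipeline described above."""
    text = tweet.lower()                        # (1) lowercase every letter
    text = re.sub(r"\s+", " ", text).strip()    # (3) contract white-space runs
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)  # (4) trim expressive lengthening
    return text

normalise("Holaaaa   QUE  tal!!!!")  # -> "holaa que tal!!"
```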
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Linguistic feature extraction</title>
        <p>
          For the extraction of linguistic features we used UMUTextStats [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], a self-developed linguistic
tool for text analysis that is based on Linguistic Inquiry and Word Count (LIWC) [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. LIWC is
an analysis tool that counts words belonging to pre-established categories based on
part-of-speech as well as other categories (family, sex, and death, to name but a few). One
of LIWC’s greatest strengths is that it has been validated across different domains. For example,
it has been used in opinion mining [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], in the analysis of suicide notes [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], in cyber-bullying
detection [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], or satire identification [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. Although LIWC was originally conceived for the
English language, it has a translated version available for Spanish. However, in contrast with
LIWC, our proposal handles specific phenomena of the Spanish language such as grammatical
gender, as well as drawing a fine-grained distinction among PoS categories, such as adjectives,
adverbs, verbs, and suffixes, among others. A further strength of UMUTextStats over LIWC is
that UMUTextStats allows for complex regular expressions, which are helpful to capture
complex features such as discourse markers, by means of which the different arguments connected
in a text can be identified. The current version of UMUTextStats captures a total of 311 different
linguistic features organised in the following categories:
• Grammatical features (GRA). The features within this category are organised
according to the Part-of-Speech (PoS) taxonomy. It includes verbs, adjectives, adverbs,
determiners, pronouns, and conjunctions, to name but a few. Moreover, this category draws
a more fine-grained distinction than LIWC. This decision was made because Spanish,
as a highly inflected language, makes use of gender and number agreement rules which
may indicate when users are reporting facts, desires or hypothetical events.
• Morphological features (MOR). The features within this category extract information
from the components of words, including prefixes and suffixes (distinguishing between
different types, namely denominal morphemes, deverbal morphemes, deadjectival
morphemes, and evaluative suffixation by means of diminutives, augmentatives, and
pejoratives). This category also contains features to match grammatical number (singular or
plural).
• Spelling and stylistic mistakes (ERR). In this category, we distinguish between
stylistic patterns, which denote the usage of colloquial language in order to detect atypical
patterns, and linguistic errors, which can either denote the low cultural level of the
writers, that is to say, errors in competence, or that they have failed to properly revise
their writings before publishing them, that is, errors in performance.
• Figurative language (FIG). The features within this category are related to the usage
of idioms, understatements, hyperboles or any other rhetorical device whose intention is
to shift the meaning of an utterance away from its literal sense [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
• Linguistic processes (LPR). This category contains stylometric features such as the
number of words, syllables, and sentences. It also distinguishes between different types of
sentences, such as interrogative, exclamatory, or literal quotations.
• Symbols (SYM). The features within this category aim to capture sentence dividers,
such as spaces, colons, and semicolons, among other general-purpose symbols.
• Entity and topic extraction (ENT). These features contain lists of general topics, including
animals, food, jobs, clothes or body parts, in order to determine the general topics of a
text. In this category, we also include the usage of inclusive language, analytic thinking,
achievements and failures, and risk perception.
• Sentiments (SEN). The features within this category aim to detect general words
and expressions related to positive and negative feelings.
        </p>
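<p>The dictionary-based part of this kind of feature extraction can be pictured with the following toy sketch; the categories, word lists, and example text are invented for illustration and bear no relation to the actual UMUTextStats dictionaries:</p>

```python
import re

# Hypothetical mini-dictionaries in the style of LIWC categories.
CATEGORIES = {
    "SEN-positive": [r"\bfeliz\b", r"\bgenial\b"],
    "SEN-negative": [r"\btriste\b", r"\bhorrible\b"],
    "LPR-exclamatory": [r"!"],
}

def count_features(text):
    """Count how many times each category's patterns match the text."""
    lowered = text.lower()
    return {
        category: sum(len(re.findall(p, lowered)) for p in patterns)
        for category, patterns in CATEGORIES.items()
    }

count_features("Qué día tan genial! Estoy feliz!")
# -> {"SEN-positive": 2, "SEN-negative": 0, "LPR-exclamatory": 2}
```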
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Models based on word-embeddings</title>
        <p>
          The state of the art in SA makes use of word-embeddings and deep-learning methods.
Word-embeddings, compared with traditional representations such as those based on n-grams
and one-hot vectors, have two significant advantages. On the one hand, they represent words
as dense vectors rather than sparse vectors. The main idea behind this approach is that dense
vectors allow words with similar meanings to be clustered together. On the other hand, word-embeddings can be
initialised by applying unsupervised techniques to general-purpose sources, such as
social networks, free encyclopedias or news sites, instead of being initialised with random values.
Moreover, the idea of word-embeddings can be extended from words to whole texts, in order
to represent sentences or paragraphs as dense vectors [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. In this sense, sentence-embeddings
are calculated by averaging the embeddings of the words that compose them.
        </p>
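<p>The averaging described above can be sketched as follows; the three-dimensional vectors are toy values, whereas real systems use pre-trained embeddings of a few hundred dimensions:</p>

```python
import numpy as np

# Toy embedding table; a real system would load pre-trained vectors.
EMBEDDINGS = {
    "buen": np.array([0.2, 0.5, -0.1]),
    "dia": np.array([0.4, -0.3, 0.6]),
}

def sentence_embedding(tokens, table=EMBEDDINGS, dim=3):
    """Average the word vectors of the tokens found in the table."""
    vectors = [table[t] for t in tokens if t in table]
    if not vectors:
        return np.zeros(dim)  # fall back for fully out-of-vocabulary input
    return np.mean(vectors, axis=0)

sentence_embedding(["buen", "dia"])  # -> array([0.3, 0.1, 0.25])
```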
        <p>Our participation in the TASS 2020 workshop involved three runs. The first run (LF + WE)
combined linguistic features trained with a Multilayer Perceptron and word-embeddings
trained with a Convolutional Neural Network (CNN); the second run (LF) trained
the linguistic features alone with Support Vector Machines; and the third run (LF + SE) involved
combining the linguistic features with sentence embeddings.</p>
        <p>
          For the first run we used the functional API of Keras [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] to create a classifier that combines
the inputs from a CNN [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], composed of an Embedding layer initialised with Spanish pre-trained
word-embeddings from fastText [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] and a multilayer perceptron for training the linguistic features.
The outputs of both deep-learning branches were concatenated and combined through two
additional layers and the output layer for the final prediction. The architecture diagram
of the first run is shown in Figure 1. We proposed the usage of a CNN in order to exploit
the spatial dimension of word-embeddings, because these kinds of networks are able to find
common patterns of words regardless of their position in the text, which can help address some
NLP problems such as polysemy. In this run, the training dataset was used for training and the
development dataset for evaluating our proposal.
        </p>
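<p>A minimal sketch of this two-branch architecture with the Keras functional API is shown below; the layer types follow the description above, but every size (vocabulary, sequence length, filter count, dense units, number of classes) is an illustrative assumption rather than the configuration actually used:</p>

```python
from tensorflow.keras import layers, Model

VOCAB, MAXLEN, EMB_DIM, N_LING, N_CLASSES = 1000, 50, 32, 311, 4

# Branch 1: CNN over the token sequence; in the paper the Embedding
# layer is initialised with Spanish pre-trained fastText vectors.
tokens = layers.Input(shape=(MAXLEN,), name="tokens")
x = layers.Embedding(VOCAB, EMB_DIM)(tokens)
x = layers.Conv1D(64, 3, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)

# Branch 2: multilayer perceptron over the linguistic features.
linguistic = layers.Input(shape=(N_LING,), name="linguistic")
y = layers.Dense(64, activation="relu")(linguistic)

# Concatenate both branches; two further dense layers and the output
# layer produce the final prediction.
z = layers.Concatenate()([x, y])
z = layers.Dense(32, activation="relu")(z)
z = layers.Dense(16, activation="relu")(z)
output = layers.Dense(N_CLASSES, activation="softmax")(z)

model = Model(inputs=[tokens, linguistic], outputs=output)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```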
        <p>
          For the second and the third runs, we used the Weka platform [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] to evaluate the reliability
of using linguistic features on their own (LF) and in combination with sentence-embeddings (LF +
SE). Both runs were trained with Sequential Minimal Optimisation (SMO), a machine-learning
algorithm for training Support Vector Machines (SVMs). Specifically, we set this SVM to
use a polynomial kernel in order to learn non-linear decision boundaries. These runs were trained on
the combination of the training and development datasets.
        </p>
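<p>Since Weka's SMO is a Java implementation, an analogous sketch can be written with scikit-learn's SVC, which also trains an SVM with a polynomial kernel; the random data below merely stands in for the linguistic-feature vectors:</p>

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 311))   # one 311-dimensional feature vector per tweet
y = rng.integers(0, 3, size=100)  # stand-in sentiment labels

# Polynomial kernel, as in the runs described above, to learn
# non-linear decision boundaries.
clf = SVC(kernel="poly", degree=2)
clf.fit(X, y)
preds = clf.predict(X)
```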
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>Our team participated in all the tasks proposed in the TASS 2020 workshop. All runs were
evaluated using the macro-averaged versions of the F-measure (F1), Precision (P), and Recall (R).
First, Table 2 contains the results of our runs for Subtask 1.1, regarding monolingual classification.
For the first run, which combined the CNN and the linguistic features, the European Spanish
dataset (es) achieved the best result, with a macro F1-measure of 0.503311. As precision was
higher than recall, our proposal misses many instances of each class, but when
a class is predicted, the prediction is fairly reliable. However, the rest of the Spanish
varieties achieved significantly worse results, ranging from a macro F1-measure of 0.322492 for the
Costa Rica dataset to 0.391288 for the Uruguay dataset, with precision
and recall closer to each other. For runs 2 and 3, where training was
performed on the combination of the training and development datasets, the results varied, and both
runs achieved their best result on the Uruguay dataset (0.49832 for run 2 and 0.520168 for run
3). However, on the European Spanish dataset the results are lower than the ones achieved
in the first run, owing to the lack of the development dataset during the training stage.</p>
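<p>For reference, the macro-averaged metrics used throughout the evaluation simply average the per-class scores with equal weight; a sketch with scikit-learn on invented toy labels:</p>

```python
from sklearn.metrics import precision_recall_fscore_support

# Toy gold labels and predictions (P = positive, N = negative, NEU = neutral).
y_true = ["P", "N", "NEU", "P", "N", "NEU"]
y_pred = ["P", "N", "P",   "P", "NEU", "NEU"]

# average="macro" computes each class's score and averages them with
# equal weight, regardless of class frequency.
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
```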
      <p>When our results are compared with those of the rest of the participants, it can be observed that
the other teams achieved more stable results, with macro-F1 scores that are much more similar across the
different datasets. For example, the best result, achieved by daniel.palomino.paucar, was a
macro-F1 measure of 0.64667, which outperforms our proposal.</p>
      <p>Next, the results achieved by our proposal for Subtask 1.2, regarding multilingual classification,
are shown in Table 3. In this subtask, we merged the datasets provided for Subtask 1.1 in order
to create a multilingual classifier. The results of the runs for this subtask
were similar to those achieved in Subtask 1.1, with our best macro F1-measure being
0.357876 for run 1, although with similar precision and recall rates. The best result achieved in
this subtask among all the participants was 0.497966. However, as the number of participants
in this subtask was low, this result must be taken with caution. It is worth noting that the rules
of the tasks allowed participants to use external corpora or linguistic resources, whereas our
participation was limited to the datasets provided. In this sense, we cannot provide a fair
comparison with the rest of the participants until we analyse their approaches in detail.</p>
      <p>Compared with purely statistical models such as those based on word-embeddings,
linguistic features provide interpretability. It is possible, therefore, to obtain the most discriminatory
linguistic features by calculating Information Gain (IG), a metric used by ensemble
methods such as decision trees in order to determine when a new branch must be created.
Figure 2 shows the 20 features with the highest Information Gain for the combination of the training
datasets provided for Subtask 1.1. We can observe that Sentiments (SEN)
is the linguistic category which provides the largest number of discriminatory features, including
positive, negative, anger, offensive and sad. Out of these features, positive is, by far, the most
discriminatory one. Grammatical features (GRA) is another category that provides
several discriminatory features, such as adverbs-negation and adjectives-qualifying. It is worth noting that some of these
linguistic features are hard to capture with models based on word-embeddings, such as those
regarding stylistic patterns (misspellings, linguistic errors, and swearing), as
well as other features used to add emphasis, such as the number of exclamatory sentences.</p>
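<p>A hedged sketch of this kind of feature ranking is given below; mutual information, as estimated by scikit-learn's mutual_info_classif, is one common way of computing Information Gain, and the synthetic data (where the label depends only on the first feature) stands in for the actual linguistic features:</p>

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))       # 5 synthetic "linguistic features"
y = (X[:, 0] > 0).astype(int)       # label is a function of feature 0 only

scores = mutual_info_classif(X, y, random_state=1)
ranking = np.argsort(scores)[::-1]  # indices from most to least informative
```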
      <p>Finally, the results for Task 2, emotion detection, are shown in Table 4. As can be observed,
the combination of linguistic features and sentence embeddings (run 3) achieved the best result among
our runs. It is worth noting that run 3 achieves only slightly better results than
run 2 (LF), but with lower precision and higher recall. Our best macro F1-measure is 0.378694,
which is reasonably successful in view of the complexity of the task, although the best score
was achieved by jogonba2elirf with a macro F1-measure of 0.446582.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and further work</title>
      <p>In this paper, the participation of the UMUTeam in the TASS 2020 Workshop on Sentiment
Classification applying linguistic features has been described. For Subtask 1.1, our proposal
achieved decent results only for certain datasets: European Spanish with the CNN and linguistic features,
and the Uruguay dataset with SMO and linguistic features with averaged word-embeddings.
For Subtask 1.2, which consisted in the combination of the datasets from different Spanish
varieties, we achieved more stable results irrespective of the model applied. However, the low
number of participants hindered the achievement of better insights. Finally, for Task 2, which
consisted in emotion detection, we achieved reasonably good results with a macro F1 of 0.372774
over six different classes.</p>
      <p>As regards RQ1 and RQ2, we have observed that our proposal achieves good results when
comparing the usage of LF on its own with the rest of our runs, but these results are far from
being the best overall. In this sense, we need to perform a more detailed analysis of the rest
of the participants in order to identify the weaknesses of our proposal. Regarding RQ3, we have
observed that our proposal does not provide stable results for Subtask 1.1 when the different
dialects are considered separately, but the results were more stable when training and testing
contained tweets from the different dialects at the same time. This fact suggests that some of
the linguistic features across dialects and cultural backgrounds are complementary, but
not enough to perform a reliable sentiment classification.</p>
      <p>
        We are well satisfied with our first participation in a TASS workshop and with having
competed in challenging NLP tasks. We are, however, aware of the limitations of our proposal
and of the long way ahead. To our mind, the main drawback is that we focused only on European
Spanish during the design of the linguistic features. We will, therefore, focus on improvements
for other varieties, enabling the adaptation of the system. Regarding the technological aspect,
we will include hyper-parameter tuning in our pipeline, in order to choose the optimal
hyper-parameters for each learning algorithm. We will also try to incorporate other pre-trained
and contextualised word-embeddings such as ELMo [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] and BERT [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work has been supported by the Spanish National Research Agency (AEI) and the European
Regional Development Fund (FEDER/ERDF) through projects KBS4FIA (TIN2016-76323-R) and
LaTe4PSP (PID2019-107652RB-I00). In addition, José Antonio García-Díaz has been supported
by Banco Santander and University of Murcia through the Doctorado industrial programme.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ekman</surname>
          </string-name>
          ,
          <article-title>Lie catching and microexpressions</article-title>
          ,
          <source>The philosophy of deception 1</source>
          (
          <year>2009</year>
          )
          <article-title>5</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>García-Vega</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Díaz-Galiano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>García-Cumbreras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Montejo</given-names>
            <surname>Ráez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Jiménez Zafra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Martínez-Cámara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. A.</given-names>
            <surname>Murillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. Casasola</given-names>
            <surname>Murillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chiruzzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Moctezuma</surname>
          </string-name>
          , Sobrevilla, Overview of TASS 2020:
          <article-title>Introduction emotion detection</article-title>
          ,
          <source>in: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2020</year>
          ), volume
          <volume>2664</volume>
          <source>of CEUR Workshop Proceedings</source>
          , CEUR-WS, Málaga, Spain,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Plaza del Arco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Strapparava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Urena Lopez</surname>
          </string-name>
          , M. Martin,
          <string-name>
            <surname>EmoEvent:</surname>
          </string-name>
          <article-title>A multilingual emotion corpus based on different events</article-title>
          ,
          <source>in: Proceedings of The 12th Language Resources and Evaluation Conference</source>
          , European Language Resources Association, Marseille, France,
          <year>2020</year>
          , pp.
          <fpage>1492</fpage>
          -
          <lpage>1498</lpage>
          . URL: https://www.aclweb.org/anthology/2020.lrec-
          <volume>1</volume>
          .
          <fpage>186</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>García-Díaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cánovas-García</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Valencia-García</surname>
          </string-name>
          ,
          <article-title>Ontology-driven aspect-based sentiment analysis classification: An infodemiological case study regarding infectious diseases in latin america</article-title>
          ,
          <source>Future Generation Computer Systems</source>
          <volume>112</volume>
          (
          <year>2020</year>
          )
          <fpage>614</fpage>
          -
          <lpage>657</lpage>
          . doi:https://doi.org/10.1016/j.future.
          <year>2020</year>
          .
          <volume>06</volume>
          .019.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y. R.</given-names>
            <surname>Tausczik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Pennebaker</surname>
          </string-name>
          ,
          <article-title>The psychological meaning of words: Liwc and computerized text analysis methods</article-title>
          ,
          <source>Journal of language and social psychology 29</source>
          (
          <year>2010</year>
          )
          <fpage>24</fpage>
          -
          <lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M. del Pilar</given-names>
            <surname>Salas-Zárate</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>López-López</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Valencia-García</surname>
          </string-name>
          ,
          <string-name>
            <surname>N.</surname>
          </string-name>
          <article-title>Aussenac-Gilles, Á</article-title>
          . Almela,
          <string-name>
            <given-names>G.</given-names>
            <surname>Alor-Hernández</surname>
          </string-name>
          ,
          <article-title>A study on LIWC categories for opinion mining in spanish reviews</article-title>
          ,
          <source>J. Inf. Sci</source>
          .
          <volume>40</volume>
          (
          <year>2014</year>
          )
          <fpage>749</fpage>
          -
          <lpage>760</lpage>
          . URL: https://doi.org/10.1177/0165551514547842. doi:
          <volume>10</volume>
          .1177/ 0165551514547842.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>B.</given-names>
            <surname>O'Dea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Larsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Batterham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Calear</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Christensen</surname>
          </string-name>
          ,
          <article-title>A linguistic analysis of suicide-related twitter posts</article-title>
          .,
          <source>Crisis: The Journal of Crisis Intervention and Suicide Prevention</source>
          <volume>38</volume>
          (
          <year>2017</year>
          )
          <fpage>319</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          , C. Jose, Toward multimodal cyberbullying detection,
          <source>in: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, volume Part F127655</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>2090</fpage>
          -
          <lpage>2099</lpage>
          . doi:
          <volume>10</volume>
          .1145/3027063.3053169.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M. del Pilar</given-names>
            <surname>Salas-Zárate</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Paredes-Valverde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Rodríguez-García</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Valencia-García</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Alor-Hernández</surname>
          </string-name>
          ,
          <article-title>Automatic detection of satire in Twitter: A psycholinguistic-based approach</article-title>
          ,
          <source>Knowl. Based Syst</source>
          .
          <volume>128</volume>
          (
          <year>2017</year>
          )
          <fpage>20</fpage>
          -
          <lpage>33</lpage>
          . URL: https://doi.org/10.1016/j.knosys.2017.04.009. doi:10.1016/j.knosys.2017.04.009.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M. del Pilar</given-names>
            <surname>Salas-Zárate</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Alor-Hernández</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Sánchez-Cervantes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Paredes-Valverde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>García-Alcaraz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Valencia-García</surname>
          </string-name>
          ,
          <article-title>Review of English literature on figurative language applied to social networks</article-title>
          ,
          <source>Knowl. Inf. Syst</source>
          .
          <volume>62</volume>
          (
          <year>2020</year>
          )
          <fpage>2105</fpage>
          -
          <lpage>2137</lpage>
          . URL: https://doi.org/10.1007/s10115-019-01425-3. doi:10.1007/s10115-019-01425-3.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Cer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-y.</given-names>
            <surname>Kong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Limtiaco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>St. John</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Constant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Guajardo-Céspedes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Tar</surname>
          </string-name>
          , et al.,
          <article-title>Universal sentence encoder</article-title>
          , arXiv preprint arXiv:1803.11175 (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Chollet</surname>
          </string-name>
          , et al.,
          <source>Keras</source>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Convolutional neural networks for sentence classification</article-title>
          ,
          <source>CoRR abs/1408.5882</source>
          (
          <year>2014</year>
          ). URL: http://arxiv.org/abs/1408.5882. arXiv:1408.5882.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E.</given-names>
            <surname>Grave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bojanowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joulin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          ,
          <article-title>Learning word vectors for 157 languages</article-title>
          ,
          <source>CoRR abs/1802.06893</source>
          (
          <year>2018</year>
          ). URL: http://arxiv.org/abs/1802.06893. arXiv:1802.06893.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Frank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Holmes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Pfahringer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Reutemann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. H.</given-names>
            <surname>Witten</surname>
          </string-name>
          ,
          <article-title>The WEKA data mining software: an update</article-title>
          ,
          <source>ACM SIGKDD Explorations Newsletter</source>
          <volume>11</volume>
          (
          <year>2009</year>
          )
          <fpage>10</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Peters</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Neumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Iyyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gardner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          ,
          <article-title>Deep contextualized word representations</article-title>
          ,
          <source>CoRR abs/1802.05365</source>
          (
          <year>2018</year>
          ). URL: http://arxiv.org/abs/1802.05365. arXiv:1802.05365.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          ,
          <article-title>BERT: Pre-training of deep bidirectional transformers for language understanding</article-title>
          ,
          <year>2018</year>
          . arXiv:1810.04805.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>