<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Emotion Detection for Spanish by Combining LASER Embeddings, Topic Information, and Offense Features</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fedor Vitiugin</string-name>
          <email>fedor.vitiugin@upf.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giorgio Barnabo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universitat Pompeu Fabra</institution>
          ,
          <addr-line>Barcelona</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper describes the system submitted by the WSSC Team to the EmoEvalEs@IberLEF 2021 emotion detection competition. We propose a novel model for Emotion Detection that combines transformer embeddings with topic information and offense features. The system classifies the emotions of social media texts by leveraging their context representations. Our results show that, for this kind of task, our model outperforms baselines and state-of-the-art text classification methods. As for the leaderboard, our classification model achieved a macro weighted averaged F1 score of 0.661427 and an overall accuracy of 0.675725, reaching the 9th and 10th places, respectively.</p>
      </abstract>
      <kwd-group>
        <kwd>Natural language processing</kwd>
        <kwd>Emotion detection</kwd>
        <kwd>Deep learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Emotion Detection is a branch of sentiment analysis that seeks to extract
fine-grained emotions from speech/voice, image, or text data. Detecting
emotions from texts has proven to be quite a challenging task, regardless of the
quantity of available data [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Understanding emotions expressed by users on
social media is particularly hard due to the absence of voice modulation, facial
expressions, and other features that can serve as clues during context and
relation extraction.
      </p>
      <p>
        Besides that, the need to disambiguate emotion-conveying words in order
to verify that classified emotions are real emotions still represents a significant
hurdle, since texts often contain expressions that could refer to different emotions.
For example, a phrase like "I can't stand it" could convey either anger or disgust,
depending on the context. Nonetheless, state-of-the-art results were recently obtained
by using pre-trained transformer-based models. Needless to say, in the past three
years, pre-trained language models such as BERT [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] revolutionized the NLP
world, making it possible to achieve extraordinary results in almost any known task. These
models are particularly effective because they generate word embeddings that
capture the semantic and contextual information of texts.
      </p>
      <p>
        The existing state-of-the-art emotion detection models usually only extract
context features from texts and pay less attention to external features such as the
kind of event the messages were posted about. In our work, we tried to fill this gap
by including additional context information and by also considering the presence
of offenses inside these messages. LASER [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] embeddings were used to encode
the social media texts and were then combined with topic features and offense
features.
      </p>
      <p>
        The main contribution of this study is an approach based on a combination of
contextualized word embeddings, topic information, and offense features,
specifically tailored to improve the emotion detection process. We evaluated our
methodology on the EmoEvalEs@IberLEF 2021 [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] competition dataset,
showing that our model outperforms the baselines. We also analyzed the most frequent
mistakes that our model made.
      </p>
      <p>The remainder of this paper is organized as follows. We first present the
related work, then we introduce our approach, and finally we show the experimental
results and the error analysis.</p>
    </sec>
    <sec id="sec-2">
      <title>Related work</title>
      <p>
        There are five classes of approaches for recognizing emotions in texts:
keyword-based approaches, rule-based approaches, classical learning-based approaches,
hybrid approaches, and deep learning approaches [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Recent approaches for
emotion detection propose solutions that use deep learning techniques to classify
emotions in texts.
      </p>
      <sec id="sec-2-1">
        <title>LSTM</title>
        <p>
          Deep learning is a branch of machine learning in which deep neural network
architectures learn from experience and understand the world in terms of a
hierarchy of concepts, where each concept is defined in terms of its relation to
simpler concepts. This approach allows a model to incrementally learn complex
concepts by putting together simpler ones [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. In this context, the long short-term
memory (LSTM) architecture has proved particularly effective. LSTM is a
special form of recurrent neural network (RNN) with the capability of handling
long-term dependencies; it overcomes the vanishing or exploding gradient
problem common in other types of RNNs.
        </p>
        <p>The main steps to take when using an LSTM for emotion
recognition in texts are:
1. text preprocessing, that is, tokenization, stopword removal, and
lemmatization;
2. encoding the texts through an embedding layer and then using these
embeddings to feed one or more LSTM layers;
3. delivering the outputs to a dense neural network (DNN) with as many units
as there are emotion labels and a sigmoid activation function to perform
classification.</p>
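        <p>The steps above can be sketched in code. The following is a minimal illustrative example, not the competition system: it implements a single LSTM cell and a sigmoid output layer in plain NumPy, with made-up dimensions standing in for a real embedding layer and training loop.</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # One LSTM time step: the four gates are computed from the current
    # input x and the previous hidden state h.
    d = h.shape[0]
    z = W @ x + U @ h + b        # stacked pre-activations, shape (4*d,)
    i = sigmoid(z[:d])           # input gate
    f = sigmoid(z[d:2*d])        # forget gate
    o = sigmoid(z[2*d:3*d])      # output gate
    g = np.tanh(z[3*d:])         # candidate cell state
    c_new = f * c + i * g        # forget old memory, write new memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy run: a 5-token sequence of 8-dim embeddings, a 16-unit LSTM,
# then a dense layer with one sigmoid unit per emotion label.
rng = np.random.default_rng(0)
emb_dim, hid, n_labels, seq_len = 8, 16, 7, 5
W = rng.normal(scale=0.1, size=(4 * hid, emb_dim))
U = rng.normal(scale=0.1, size=(4 * hid, hid))
b = np.zeros(4 * hid)
h, c = np.zeros(hid), np.zeros(hid)
for x in rng.normal(size=(seq_len, emb_dim)):   # stand-in embeddings
    h, c = lstm_step(x, h, c, W, U, b)
W_out = rng.normal(scale=0.1, size=(n_labels, hid))
probs = sigmoid(W_out @ h)       # one score per emotion label
```

        <p>In practice the weights are learned, the embeddings come from a trained embedding layer, and frameworks such as Keras or PyTorch provide the LSTM and dense layers directly.</p>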
      </sec>
      <sec id="sec-2-2">
        <title>Transformers</title>
        <p>
          The encoder block of the transformer, initially designed for machine translation,
has become the de-facto standard pre-trained language modeling architecture
for solving most NLP tasks, such as text classification, text generation, document
summarization, and question answering, to name a few [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Up to now, several
state-of-the-art models for detecting text-based emotions already use BERT and
its variants.
        </p>
        <p>
          One way to improve the performance of emotion classification is to
extend the BERT model with a linear transformation layer with sigmoid
activation. The proposed model was evaluated using the EmoBank data and obtained
micro F1 scores of 0.688 and 0.695 when fine-tuned on the ISEAR and SemEval
datasets, respectively [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Another way of using BERT for emotion classification
is a two-step approach that first encodes texts into vectors and then
classifies them into emotions using a softmax classifier [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. One more way of
using BERT is to extract contextualized word embeddings from text data and
subsequently use an SVM to perform classification. The authors of this approach [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] feed
the model with text passages of an average length of 650 tokens. Since BERT can
only process 512 input tokens, the essays were divided into sub-documents. The
sub-documents were pre-processed and fed into the BERT base model. Feature
vectors for the document were obtained by computing the mean of each of the
12 BERT layers' contextual token representations. The last four layer
representations were then concatenated with the corresponding 84 Mairesse features for
the essay. The feature vector was then fed into the SVM classifier, producing a
prediction. The final prediction was obtained through majority voting.
        </p>
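        <p>As a concrete illustration of this feature construction, the following NumPy sketch (with random stand-in values and hypothetical shapes: 12 layers, 128 tokens, 768-dimensional hidden states, 84 Mairesse features) mean-pools each layer's token representations, keeps the last four layers, and appends the Mairesse features:</p>

```python
import numpy as np

rng = np.random.default_rng(4)
n_layers, n_tokens, hid = 12, 128, 768
# Stand-in for the contextual token representations of one sub-document.
layer_outputs = rng.normal(size=(n_layers, n_tokens, hid))
layer_means = layer_outputs.mean(axis=1)       # mean over tokens: (12, 768)
last_four = layer_means[-4:].reshape(-1)       # concatenate last 4 layers: (3072,)
mairesse = rng.normal(size=84)                 # stand-in Mairesse features
feature_vec = np.concatenate([last_four, mairesse])   # (3156,), the SVM input
```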
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Model</title>
      <sec id="sec-3-1">
        <title>Pre-processing</title>
        <p>During the pre-processing step, we only detected and replaced all emojis with
their respective short-codes using the freely available Python library emoji. Since
the data provided by the organizers of the competition were polarized, they replaced all
the hashtags with the keyword "HASHTAG" in order to prevent the automatic
classifier from relying on hashtags to categorize the emotion associated with a
tweet. Moreover, the user mentions were replaced by "@USER".</p>
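        <p>A minimal sketch of this normalisation, using plain regular expressions as a stand-in (the placeholder substitution was done by the organizers in the released data, and the emoji library's demojize function handles the emoji short-codes):</p>

```python
import re

def normalise(tweet):
    # Replace user mentions and hashtags with the placeholder tokens
    # used in the competition data.
    tweet = re.sub(r"@\w+", "@USER", tweet)
    tweet = re.sub(r"#\w+", "HASHTAG", tweet)
    # emoji.demojize(tweet) would additionally map each emoji to its
    # textual short-code, e.g. ':red_heart:'.
    return tweet

normalise("@maria me encanta #Eurovision")  # '@USER me encanta HASHTAG'
```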
      </sec>
      <sec id="sec-3-2">
        <title>LASER Embeddings</title>
        <p>
          For representing the input data, we used embeddings generated by two
pre-trained transformer-based models: DistilBERT and Language-Agnostic
SEntence Representations (LASER) [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. The main difference between LASER and other
transformers is that it generates sentence-level embeddings instead of
word/token-level embeddings.
        </p>
        <p>
          Given an input sentence, LASER provides sentence embeddings which are
obtained by applying a max-pooling operation over the output of a Bidirectional
LSTM (BiLSTM) encoder. The BiLSTM output is constructed by concatenating the
outputs of two individual LSTMs working in opposite directions (forward and
backward). In this way, more contextual information is included in the output than
with a single LSTM reading the text from left to right. In our experiments, we
used LASER to embed all tweet sentences into 1024-dimensional fixed-size vectors.
As additional features, we detected offenses and extracted the topics of tweets;
both types of features are provided in the EmoEvent corpus. The LASER
embeddings are passed as input to a Long Short-Term Memory network
to encode the social media texts. Finally, we combined all features through the
architecture originally proposed for the detection of fake news articles [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. The
full architecture is shown in Figure 1.
        </p>
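        <p>The pooling step can be illustrated as follows. This is a schematic NumPy sketch with random stand-in values, not the actual LASER encoder (whose BiLSTM uses 512 units per direction, giving the 1024-dimensional sentence vector):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n_tokens, d = 9, 512                      # 9 tokens, 512 units per direction
h_fwd = rng.normal(size=(n_tokens, d))    # forward LSTM outputs (stand-ins)
h_bwd = rng.normal(size=(n_tokens, d))    # backward LSTM outputs (stand-ins)
h = np.concatenate([h_fwd, h_bwd], axis=1)   # per-token states: (9, 1024)
sentence_vec = h.max(axis=0)              # max-pool over time: (1024,)
```

        <p>The element-wise maximum over time makes the vector size independent of sentence length.</p>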
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Experiment</title>
      <sec id="sec-4-1">
        <title>Dataset Description</title>
        <p>
          We use the dataset released for the EmoEvalEs@IberLEF 2021 competition [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]
          ], the shared task "Emotion detection and Evaluation for Spanish". The task
consists of classifying the emotion expressed in a tweet as one of the following
emotion classes:
- anger (also includes annoyance and rage);
- disgust (also includes disinterest, dislike, and loathing);
- fear (also includes apprehension, anxiety, concern, and terror);
- joy (also includes serenity and ecstasy);
- sadness (also includes pensiveness and grief);
- surprise (also includes distraction and amazement);
- others: the emotion expressed in the tweet is 'neutral or no emotion'.
        </p>
        <p>
          The dataset is based on events that took place in April 2019 and are related to
different domains: entertainment, catastrophe, political, global commemoration,
and global strike. In total, the messages cover 8 different topics. For the task,
the dataset was split into training, development, and testing partitions. The
distribution of the EmoEvalEs@IberLEF 2021 dataset is shown in Table 1.
        </p>
        <p>
          The proposed model computes the feature vectors separately and then combines
them with the help of an MLP layer. We use categorical cross-entropy as the
loss function to optimize our architecture, with a soft-max layer that classifies
any given social media text into one of seven emotion classes. The
hyperparameter settings are shown in Table 2. The full code is provided in the project
repository https://github.com/vitiugin/ComboLASER.
In the current work, we also used schemes with a combination of DistilBERT
embeddings. The concept of distillation in neural networks aims at speeding up
models: the key idea is to replace massive architectures with countless
parameters by a lightweight version of the same architecture that possesses fewer
parameters [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. DistilBERT takes the architecture of the initial version of
BERT, reduces the number of layers of the BERT-base model by a factor of 2,
and removes token-type embeddings and poolers to yield a much smaller and faster
version of BERT for general-purpose use. It applies dynamic masking and drops
next-sentence prediction for better inference [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
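        <p>The combination step described above can be sketched as follows (an illustrative NumPy example with random stand-in values; the dimensions assume a 1024-dimensional LASER vector, 8 topics encoded one-hot, and a binary offense flag):</p>

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(2)
laser_vec = rng.normal(size=1024)    # stand-in sentence embedding
topic_onehot = np.eye(8)[3]          # the tweet's topic, one-hot encoded
offense = np.array([1.0])            # offense detected in the tweet
features = np.concatenate([laser_vec, topic_onehot, offense])   # (1033,)

# A single MLP layer followed by a softmax over the seven emotion classes.
W = rng.normal(scale=0.01, size=(7, features.size))
probs = softmax(W @ features)

# Categorical cross-entropy against a one-hot gold label.
gold = np.eye(7)[3]
loss = -np.sum(gold * np.log(probs))
```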
        <p>
          According to recent surveys, SVM is the most popular machine learning
scheme for emotion detection from text [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Consequently, one of our baselines
is a model that concatenates the transformer embedding vectors (LASER and
DistilBERT) with the topic and offense features and passes them to an SVM classifier.
        </p>
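      <p>A sketch of this baseline on synthetic stand-in data (the real inputs are the LASER/DistilBERT vectors plus the EmoEvent topic and offense labels):</p>

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n, emb_dim, n_topics = 40, 32, 8
X = np.hstack([
    rng.normal(size=(n, emb_dim)),             # embedding features (stand-ins)
    rng.integers(0, 2, size=(n, n_topics)),    # topic indicator features
    rng.integers(0, 2, size=(n, 1)),           # binary offense feature
])
y = rng.integers(0, 7, size=n)                 # seven emotion classes
clf = LinearSVC().fit(X, y)                    # linear SVM on the concatenation
preds = clf.predict(X)
```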
      <p>To demonstrate the need for the additional topic and offense feature
vectors, we also used transformer embeddings alone as input to an LSTM model.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Results</title>
        <p>As evaluation measures, we used two multi-class classification metrics: accuracy
and the macro weighted averaged F1 score. The full results on the development and test
splits are shown in Table 3.</p>
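        <p>A toy illustration of the two measures with scikit-learn, on made-up labels (the choice of average='weighted' here is an assumption of this sketch, mirroring the weighted averaging named above):</p>

```python
from sklearn.metrics import accuracy_score, f1_score

gold = ["joy", "anger", "sadness", "joy", "others", "fear", "joy"]
pred = ["joy", "disgust", "sadness", "joy", "others", "sadness", "anger"]

acc = accuracy_score(gold, pred)                 # fraction of exact matches
f1 = f1_score(gold, pred, average="weighted")    # per-class F1, weighted by support
```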
        <p>We can observe that the SVM-based models with concatenated feature
vectors achieve high performance even compared with LSTM-based networks trained
only on transformer embeddings. Furthermore, LASER embeddings demonstrate
higher performance than DistilBERT embeddings. The proposed
Combo LASER model shows the highest performance, which is perhaps due to
the fact that it takes into consideration the sentence-level context
encoded in the LASER embeddings. In terms of performance, the proposed solution
is 4.5% worse than the solution that took first place.</p>
        <p>Analysing our model's mistakes, we found that it often (in more than
50% of cases relative to the class volume in the test data) misclassified Disgust as
Anger and Fear as Sadness. On the other hand, the best results were achieved
for Sadness, Surprise, and Others (less than 25% of mistakes).</p>
        <p>We also found that two pairs of emotions were confused in both
directions: Anger/Disgust and Joy/Others. While the confusion within the first
pair can be explained by the close nature of these emotions, the second pair can
only be explained by the size of the training and test data: the Joy and Others
classes are over-represented in both.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusion</title>
      <p>
        In this paper, we explored the benefit of adding transformer embeddings, topic
information, and offense features to deep neural networks on the task of
multi-class emotion detection. We also presented our model based on extracted
pre-trained LASER embeddings. Experiments on the dataset released during the
EmoEvalEs@IberLEF 2021 competition demonstrate that our Combo LASER model
performs better than several baselines, and that the additional features improve
performance compared with models based only on transformer
embeddings [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. We also presented an analysis of the mistakes that our model made at
classification time, which can inform future studies on emotion detection.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Acheampong</surname>
            ,
            <given-names>F.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nunoo-Mensah</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Transformer models for text-based emotion detection: a review of bert-based approaches</article-title>
          .
          <source>Artificial Intelligence Review</source>
          pp.
          <volume>1</volume>
          -
          <issue>41</issue>
          (
          <year>2021</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Al-Rfou</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Choe</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Constant</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jones</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Character-level language modeling with deeper self-attention</article-title>
          .
          <source>In: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          . vol.
          <volume>33</volume>
          , pp.
          <volume>3159</volume>
          -
          <issue>3166</issue>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Alswaidan</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Menai</surname>
            ,
            <given-names>M.E.B.</given-names>
          </string-name>
          :
          <article-title>A survey of state-of-the-art approaches for emotion recognition in text</article-title>
          .
          <source>Knowledge and Information Systems</source>
          pp.
          <volume>1</volume>
          -
          <issue>51</issue>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Artetxe</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schwenk</surname>
          </string-name>
          , H.:
          <article-title>Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond</article-title>
          .
          <source>Transactions of the Association for Computational Linguistics</source>
          <volume>7</volume>
          ,
          <issue>597</issue>
          -
          <fpage>610</fpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Bhatt</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sharma</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sharma</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nagpal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raman</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mittal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>On the benefit of combining neural, statistical and external features for fake news identification</article-title>
          .
          <source>arXiv preprint arXiv:1712.03935</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Devlin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chang</surname>
            ,
            <given-names>M.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toutanova</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Bert: Pre-training of deep bidirectional transformers for language understanding</article-title>
          . arXiv preprint arXiv:1810.04805
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Goodfellow</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Courville</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bengio</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Deep learning</article-title>
          ,
          <source>vol. 1</source>
          . MIT press Cambridge (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kazameini</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fatehi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mehta</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eetemadi</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cambria</surname>
          </string-name>
          , E.:
          <article-title>Personality trait detection using bagged svm over bert word embedding ensembles</article-title>
          . arXiv preprint arXiv:2010.01309
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ott</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goyal</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Du</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Joshi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Levy</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lewis</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zettlemoyer</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stoyanov</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Roberta: A robustly optimized bert pretraining approach</article-title>
          . arXiv preprint arXiv:1907.11692
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Luo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>Emotionx-hsu: Adopting pre-trained bert for emotion classification</article-title>
          . arXiv preprint arXiv:1907.09669
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Montes</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gonzalo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aragon</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Agerri</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alvarez-Carmona</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alvarez Mellado</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carrillo-de Albornoz</surname>
          </string-name>
          , J.,
          <string-name>
            <surname>Chiruzzo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Freitas</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomez Adorno</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gutierrez</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jimenez-Zafra</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lima</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plaza-del-Arco</surname>
            ,
            <given-names>F.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taule</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (eds.):
          <source>Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2021</year>
          ) (
          <year>2021</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jeon</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Toward dimensional emotion detection from categorical emotion annotations</article-title>
          . arXiv preprint arXiv:1911.02499
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Plaza-del-Arco</surname>
            ,
            <given-names>F.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jimenez-Zafra</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montejo-Raez</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Molina-Gonzalez</surname>
            ,
            <given-names>M.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ureña-Lopez</surname>
            ,
            <given-names>L.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martin-Valdivia</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          :
          <article-title>Overview of the EmoEvalEs task on emotion detection for Spanish at IberLEF 2021</article-title>
          .
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>67</volume>
          (
          <issue>0</issue>
          ) (
          <year>2021</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Plaza-del-Arco</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Strapparava</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ureña-Lopez</surname>
            ,
            <given-names>L.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martin-Valdivia</surname>
          </string-name>
          , M.T.:
          <article-title>EmoEvent: A Multilingual Emotion Corpus based on different Events</article-title>
          .
          <source>In: Proceedings of the 12th Language Resources and Evaluation Conference</source>
          . pp.
          <volume>1492</volume>
          -
          <fpage>1498</fpage>
          .
          <string-name>
            <surname>European Language Resources Association</surname>
          </string-name>
          , Marseille, France (May
          <year>2020</year>
          ), https://www.aclweb.org/anthology/2020.lrec-1.186
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Tang</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mou</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vechtomova</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Distilling task-specific knowledge from bert into simple neural networks</article-title>
          .
          <source>arXiv preprint arXiv:1903.12136</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>