<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Emotions and News Structure: An Analysis of the Language of Fake News in Spanish</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Benedetta Togni</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mariona Coll Ardanuy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Berta Chulvi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paolo Rosso</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>PRHLT Research Center, Universitat Politècnica de València</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Social Psychology Department, Universitat de València</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Symanto Research</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Valencian Graduate School and Research Network of Artificial Intelligence (ValgrAI)</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Research has repeatedly demonstrated that fake news tend to appeal to the emotions, but it is less common to consider the presence of emotions in relation to the inverted pyramid, a key aspect of journalistic writing. In this work, we adopt an existing neural model for fake news detection, FakeFlow, one of the few approaches to consider the salience of emotions with respect to the structure of news articles, and adapt it for Spanish. We conduct our experiments on the Spanish Fake News Corpus, introduced at the Fake News Detection in Spanish shared task (FakeDeS), with the goal of gaining a better understanding of the characteristics underlying such texts. In our analyses, we show that both the distribution of affective features and the attention mechanism of the model validate the importance of considering the inverted pyramid structure for detecting fake news.</p>
      </abstract>
      <kwd-group>
        <kwd>Fake news detection</kwd>
        <kwd>inverted pyramid</kwd>
        <kwd>emotion analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <p>
          In an increasingly connected world, research on detecting fake news is more necessary than ever [
          <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
          ]. For years, researchers from different disciplines (such as sociology, NLP and network analysis) have been joining efforts to understand this phenomenon and find effective ways of addressing it. From a language perspective, to date most research has focused on English data. Even for Spanish, one of the most-spoken languages in the world, detection of fake news is a much under-researched topic, despite its becoming an increasingly important social concern.1 The Fake News Detection in Spanish shared tasks (FakeDeS), held at the IberLEF 2020 and 2021 workshops, are amongst the most notable efforts towards addressing this problem from an NLP perspective. The organisers introduced a new dataset, the Spanish Fake News Corpus (henceforth SFNC) [
          <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
          ], to date the only dataset in Spanish for the task of fake news detection. The best performing approach in the competition achieved an F1 of 0.76, leaving plenty of room (and need) for improvement.
        </p>
        <p>SEPLN-2024: 40th Conference of the Spanish Society for Natural Language Processing. Valladolid, Spain. 24-27 September 2024. btogni1@upvnet.upv.es (B. Togni); mcoll@prhlt.upv.es (M. Coll Ardanuy); berta.chulvi@symanto.com (B. Chulvi); prosso@dsic.upv.es (P. Rosso). 0000-0001-8455-7196 (M. Coll Ardanuy); 0000-0003-1169-0978 (B. Chulvi); 0000-0002-8922-1242 (P. Rosso). 1The Spanish Centre for Sociological Research (CIS) conducted a survey which revealed that 3.3% of the respondents considered the role of the media and social networks (misinformation, information manipulation, dissemination of hoaxes) as one of the three main problems in this country. See: https://www.cis.es/catalogo-estudios/resultados-definidos/barometros, accessed on May 28, 2024.</p>
        <p>The aim of this paper is to gain a better understanding of the challenges of detecting fake news, from a linguistic perspective, and with a focus on Spanish. The goal of our research is to shed some light on this phenomenon and to obtain insights which may help researchers to informatively build better classifiers. In particular, we look at the presence of affective content in the language of fake news. In doing so, we take into consideration a distinctive feature of journalistic writing: the inverted pyramid structure, according to which the most important information is presented at the beginning, with less essential details following in descending order of importance. This style allows readers to grasp the essential elements of a story quickly, even if they only read the first few sentences [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].</p>
        <p>
          In text classification tasks, the information structure of news articles is not often accounted for, even though some research exists in this direction. In fact, the best performing approach at the FakeDeS 2021 competition [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] used BERT [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] to encode the first and last 512 tokens of the news articles, and concatenated the two embeddings together with an additional memory embedding intended to capture the relationship between samples. The approach was built on the assumption that the middle part of a document is the least informative. This assumption was to some extent demonstrated on two datasets in [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], who showed that removing the middle part of a document was a successful strategy to truncate long articles when fine-tuning a transformer model for text classification. However, only one of the two datasets consisted of news articles, and full details of these experiments were not provided.
        </p>
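        <p>The first-and-last truncation strategy described above can be sketched as follows. This is a simplified illustration (not the code of [8]); the tokeniser and the downstream encoder are left abstract, and the helper name is ours:</p>
        <preformat>
```python
def truncate_first_last(tokens, max_len=512):
    """Return the first and last `max_len` tokens of an article,
    dropping the (assumed least informative) middle part."""
    if len(tokens) > 2 * max_len:
        return tokens[:max_len], tokens[-max_len:]
    # Short article: both views cover the whole text.
    return tokens, tokens
```
        </preformat>
        <p>Each of the two views would then be encoded separately and the resulting embeddings concatenated, as in the approach described above.</p>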
        <p>
          Research has repeatedly demonstrated that fake news
tend to appeal to the emotions [
          <xref ref-type="bibr" rid="ref2">11, 2</xref>
          ], especially
negative emotions [12], with the intent of triggering a specific
reaction from the reader [13]. Several approaches have
taken this information into account in their models [14].
        </p>
        <p>One such approach is FakeFlow [15], a neural model
which was developed with the specific goal of learning
the flow of semantic and affective information in news
articles, and using this information for detecting fake news.</p>
        <p>In this paper, we adapt FakeFlow for Spanish and use
its inherently interpretable capabilities for analysing the
distinction between fake and true news, focusing on the
SFNC dataset. Unlike in English, the FakeFlow-based
classifier for Spanish underperforms when compared with a
RoBERTa-based classifier, but nevertheless offers useful
insights on the data. We extensively analyse our results
both in relation to the affective information and the
structure of the news articles, and show why a generalisable
solution to fake news detection is hard to achieve.</p>
        <p>The rest of the paper is structured as follows: we
describe the FakeFlow approach and how we have adapted
it for Spanish in Section 2; we describe the classification
experiments in Section 3; we analyse and discuss
our findings in Section 4; and provide conclusions and
directions for further research in Section 5.2</p>
      </sec>
      <sec id="sec-1-2">
        <title>2. Approach</title>
        <p>FakeFlow [15] was developed with the specific goal of learning the ‘flow’—i.e. the sequence of information salience—of semantic and affective features in news articles. As shown in Figure 1, FakeFlow consists of two communicating branches: on the one hand, the topic information branch is built upon word embedding representations of the news segments, and is responsible for capturing the semantics of the texts; on the other hand, the affective information branch is built upon vectors of lexical features which represent the affective aspect of the texts. In order to model the flow of information, each news article is first split in N segments. The neural architecture of the model is designed to capture the joint interaction between both the topic and affective information in each segment, including a self-attention layer which was added with the purpose of highlighting the importance of a segment in relation to the neighbouring segments in the news article. Finally, the two branches are combined and, for a given text as input, a probability over the two classes (fake and true) is returned.3</p>
        <p>As input for the topic information branch, we have used the openly available word2vec [16] embeddings trained on the Spanish Billion Words Corpus4 [17]. The affective information branch requires that each segment is represented as a vector of lexical features. These vectors are built from term frequency representations (based on lexical resources), which are then normalised by the length of the article. In order to adapt FakeFlow for Spanish, we looked for lexical resources in Spanish to capture the affective dimension of the news articles. We have extracted the following features for each news segment:</p>
        <p>• Emotion features: As in the original paper, we have used the NRC Emotion Lexicon [18, 19],5 which associates words with eight emotions: anger, fear, anticipation, trust, surprise, sadness, joy, and disgust. After processing the lexicon, the Spanish lexicon consists of 11,326 words.6 Each emotion corresponds to a feature in our model (8 features).</p>
        <p>• Sentiment features: The same NRC Emotion Lexicon also contains associations between words (10,898 after processing) and sentiment polarity: positive and negative. Each is assigned a feature in the model (2 features).</p>
        <p>
          • Hurtful language feature: The original FakeFlow paper used the Moral Foundations Dictionary [
          <xref ref-type="bibr" rid="ref12">21</xref>
          ] to extract a set of morality features. An examination of the content showed that this resource is strongly rooted in American culture. Given that some of the categories in the dictionary are related to the willingness or unwillingness to hurt others, we decided to use HurtLex instead, a validated lexicon of offensive, aggressive, and hateful words7 [
          <xref ref-type="bibr" rid="ref13">22</xref>
          ], which is more fitting to the Spanish-speaking context. After processing, the lexicon of hurtful words consists of 2,008 words (1 feature).
        </p>
        <p>• Hyperbolic language feature: In the original approach, a set of hyperbolic words was manually extracted from clickbait news headlines. We have translated these words into Spanish and have removed uncommon words, obtaining a new list of 292 words that are heavily loaded with a positive or negative sentiment, such as ‘abrumador’ and ‘sorprendente’ (1 feature).</p>
        <p>
          • Affective-semantic features: The original model uses the MRC psycholinguistic database [
          <xref ref-type="bibr" rid="ref14">23</xref>
          ] to characterise words in terms of their degree of abstractness and imageability. We have replaced it with an equivalent resource in Spanish [
          <xref ref-type="bibr" rid="ref15">24</xref>
          ], a dataset that not only provides the semantic information related to the degree of abstraction but also characterises the intensity of the emotion and its valence. After processing, this dictionary consists of 1,400 words, each score normalised between 0 and 1 in terms of their conveyance of valence, arousal, concreteness, and imageability (4 features).
        </p>
        <p>…the association between words and features, and only kept nouns, verbs, adjectives, and adverbs.</p>
        <p>7https://github.com/valeriobasile/hurtlex. We have used only words categorised as: negative stereotypes and ethnic slurs, physical disabilities and diversity, cognitive disabilities and diversity, moral and behavioural defects, words related to social and economic disadvantage, words related to prostitution, words related to homosexuality, words with potential negative connotations, derogatory words, felonies and words related to crime and immoral behaviour, and words related to the seven deadly sins of the Christian tradition.</p>
        <sec id="sec-1-2-a">
          <title>3. Experiments</title>
          <p>We describe the dataset we have used in our investigation in Section 3.1 and the classification experiments and results in Section 3.2.</p>
        </sec>
        <sec id="sec-1-2-b">
          <title>3.1. Dataset</title>
          <p>
            We have conducted our experiments on the Spanish Fake News Corpus (SFNC)8 [
            <xref ref-type="bibr" rid="ref4 ref6">6, 4</xref>
            ]. The dataset consists of 1,543 texts, and was compiled in two stages:
          </p>
          <p>
            • The training and development sets were collected as part of the first edition of the shared task [
            <xref ref-type="bibr" rid="ref5 ref6">6, 5</xref>
            ], and consist of 971 news articles (676 for training and 295 for development) that were collected from 134 different media websites between January and July 2018, covering the following topics: science, sports, economy, education, entertainment, politics, health, security, and society.
          </p>
          <p>
            • The test set was collected for the second edition of the shared task [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ]. It consists of 572 texts, collected between November 2020 and March 2021, in the midst of the Covid-19 pandemic. The test set consists of articles from four of the topics already covered in the training and development sets (science, sports, politics and society) and introduces three new topics: Covid-19, environment, and international. The texts come from a wide range of sources and are written in different varieties of Spanish.
          </p>
        </sec>
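        <p>As an illustration, the lexicon-based segment features described in Section 2 (lexicon match counts per segment, normalised by the length of the whole article) can be sketched as follows. This is a simplified sketch, not the authors' implementation; the two-emotion toy lexicon is invented for illustration only:</p>
        <preformat>
```python
from collections import Counter

# Toy two-emotion lexicon, invented for illustration; the actual
# features use the NRC lexicon, HurtLex and the other resources above.
LEXICON = {
    "anger": {"furia", "odio"},
    "fear": {"miedo", "terror"},
}

def segment_features(segments):
    """One affective feature vector (anger, fear) per segment:
    lexicon match counts normalised by the article length."""
    article_len = sum(len(seg.split()) for seg in segments) or 1
    vectors = []
    for seg in segments:
        tokens = [t.lower().strip(".,;:!?") for t in seg.split()]
        counts = Counter()
        for emotion, words in LEXICON.items():
            counts[emotion] = sum(1 for t in tokens if t in words)
        vectors.append([counts["anger"] / article_len,
                        counts["fear"] / article_len])
    return vectors
```
        </preformat>
        <p>In the full model, the same per-segment vectors are computed for all sixteen features and fed to the affective information branch.</p>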
        <sec id="sec-1-2-1">
          <title>3.2. Classification Experiments</title>
        </sec>
      </sec>
      <sec id="sec-1-3">
        <p>Since the task is very similar to that for which FakeFlow was originally developed, we eventually used the same hyperparameters as for English:9 while we experimented with different choices of hyperparameters, we did not consistently obtain significant improvements. We show the results of adapting FakeFlow for Spanish in Table 2. For comparison, we provide the most common class baseline (in this case, ‘true’) and a RoBERTa-based baseline.10 While the original FakeFlow in English surpassed transformer-based approaches [15], in our case we see that the RoBERTa-based classifier is by far the better performing approach, with a difference of 8 points in macro F1-score with respect to the FakeFlow approach. We also compare the FakeFlow model with two simplified versions of the approach, using only a subset of the features: (1) only emotions uses only the eight emotions as features; (2) only negative considers only negative features: ‘anger’, ‘disgust’, ‘fear’, ‘sadness’, ‘negative’ and ‘hurtful’. Neither of the simplified versions of FakeFlow yields a better performance, in both cases providing a lower F1 for the true class and a higher score for the fake class. In the following sections, we show how a careful inspection of the dataset, using the features and the outputs of the FakeFlow model, can shed some new light to help us interpret these results.</p>
        <p>8https://github.com/jpposadas/FakeNewsCorpusSpanish.</p>
        <p>9With the exception of the learning rate, which we set to 0.0001.</p>
        <p>
          10We have used an existing model available on the HuggingFace hub: Narrativaai/fake-news-detection-spanish, which was fine-tuned for text classification on the training and development sets of the SFNC dataset. It is based on the PlanTL-GOB-ES/roberta-large-bne model, pre-trained on the largest existing corpus in Spanish [
          <xref ref-type="bibr" rid="ref16">25</xref>
          ]. This RoBERTa-based classifier came out after the shared task, and outperforms all the approaches that participated in the task. We have formatted the input of the classifier according to the indications of the authors, by concatenating the headline, the special token ‘[SEP]’, and the text.
        </p>
      </sec>
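        <p>The input format indicated in the footnote above (headline, separator token, body) amounts to a one-line helper; a sketch under the assumption that tokenisation is handled downstream by the model's own tokeniser, with a hypothetical function name:</p>
        <preformat>
```python
def format_input(headline, body, sep_token="[SEP]"):
    """Concatenate headline and body around the separator token,
    following the indications of the classifier's authors."""
    return f"{headline} {sep_token} {body}"
```
        </preformat>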
    </sec>
    <sec id="sec-2">
      <title>4. Analysis and Discussion</title>
      <p>In this section, we analyse the differences observed
between the fake and true subsets of the SFNC dataset
in terms of their distributions of the features used by
the FakeFlow classifier (in Section 4.1) and the attention
weights produced by the classifier (in Section 4.2), and
discuss our findings.</p>
      <sec id="sec-2-1">
        <title>4.1. Feature Analysis</title>
        <p>
          An analysis of the distribution of the features on the full dataset reveals that fake news have on average higher emotional content than true news, as is also reported in previous works [11, 15]. In Figure 2, we show the changing salience of the sixteen features over the article segments. First, we see that, on average (i.e., the dotted lines), fake news have higher values than true news, and that, both in true and fake news, emotion values are higher at the beginning of the article. This pattern is particularly noticeable on some features, such as ‘anger’, ‘anticipation’, and ‘fear’. We observe that ‘negative’ is more prominent in fake news, whereas the signal is less clear for ‘positive’, in the same vein as previous work indicates [11]. The ‘hurtful’ feature is consistently higher in fake news, and so is ‘hyperbolic’, even though less prominently. Finally, the four affective-semantic features are higher in true news, except in the first segment. In the case of ‘concreteness’ and ‘imageability’, this information could be used to corroborate findings from previous research, according to which fake news tend to require a smaller cognitive effort from the reader in order to process the content [11]. Further research using additional lexical resources, such as [
          <xref ref-type="bibr" rid="ref17">26</xref>
          ], and on additional datasets is needed to validate these insights and to investigate these phenomena in more detail.
        </p>
        <p>A breakdown of the dataset into topics reveals different distributions of features. In Figure 3, we show the distribution of the eight emotions (plus ‘hurtful’) on three topics: politics, entertainment, and Covid-19.11 Here, we distinguish between the training set and the test set, to account both for the topic imbalance and also the different temporal coverage of both subsets. First, focusing on the training set (i.e., the figures on the left, built from news articles collected during the first half of 2018), we observe that ‘hurtful’ stands out both on true and fake news on entertainment, whereas the other emotions are, with the exception of ‘surprise’ and ‘joy’ on the fake subset, less prominent than on politics. In general, the language of political news is more emotional, the main difference between true and fake being higher ‘hurtful’, ‘anger’, and ‘joy’ values in the latter.</p>
        <p>On the test set (i.e., the figures on the right, built from news articles collected between November 2020 and March 2021), we see that articles on Covid-19 have a very different distribution of features, especially patent in the case of ‘fear’ and ‘sadness’, which stand out both for true and fake news. Fake news on Covid-19 have consistently higher values than true news for all emotions and for hurtful language. Finally, it is interesting to note the evolution of the emotional content in political news from 2018 (i.e., the training set) to 2020–2021 (i.e., the test set): we see true news becoming more similar to fake news, particularly with respect to ‘anger’ and ‘hurtful’. Given that almost three convulsed years span between the articles in both datasets, it remains to be investigated whether this difference is specific of this dataset or it is the product of the larger question of whether, in recent years, the style of mainstream news has become more similar to that of fake news, perhaps as a strategy to compete with the appeal of fake news.</p>
        <p>11Topics were selected based on their frequency in the dataset: ‘politics’ is the most popular topic overall, ‘entertainment’ is the most popular topic in the training and development sets (besides ‘politics’), and ‘Covid-19’ is the most popular topic in the test set.</p>
      </sec>
      <sec id="sec-2-2">
        <title>4.2. Attention Analysis</title>
        <p>The authors of the original FakeFlow paper [15] observed that the model attended more at the beginning of the articles, concluding that in English most of the information useful for discriminating between fake and true news is presented at the beginning of the news articles. As described in Section 2, the FakeFlow architecture has a self-attention mechanism which highlights the importance of a segment in relation to the other segments in the news article. We therefore extracted the matrices of attention weights from the test set and averaged them. In Figure 4, we show the matrices of averaged attention weights of the texts that have been correctly classified as true and those correctly classified as fake. We can see that, both in fake and true news, the model attends more at the beginning of the article—as in English, according to [15]—whereas the middle part is less attended, and the last part is the least attended. This suggests that the first part of the articles is on average more useful for discriminating fake from true news, but further investigation on the role of inter-segment attention with respect to the classification decision is needed. Finally, our version of FakeFlow allows the user to inspect the different articles in the dataset, and visualise how much the classifier attended to each of the segments in the article, as illustrated in Figure 5.</p>
        <p>(a) True news. (b) Fake news.</p>
        <p>In order to validate whether these findings are specific of the FakeFlow approach or more generic, we have done a text-based ablation study using the RoBERTa classifier. In Table 3, we observe how the performance of the classifier changes depending on the input that is provided: ‘first part’ contains only segments 1 to 3 (i.e., the first part of the articles), ‘middle part’ includes segments 4 to 7, and ‘last part’ includes segments 8 to 10. For comparison, we provide the full text (excluding the headline, hence the difference with respect to Table 2) and the full text removing either the middle or the last part. It is important to note the drop of F1 of the true class for individual parts, whereas the F1 of the fake class is less (or not) affected. The results show that the middle and last parts are clearly less informative, and we show that removing the last part does not affect the classification (and even improves the F1 of the fake class). This is an interesting finding, not only from a scientific perspective (as it aligns with the well-known inverted pyramid approach to writing news articles), but also for practical application.12</p>
        <p>12Language models usually truncate long input sequences to a maximum length. A better understanding of which parts of the article are more likely to be informative may be a safe strategy to reduce time and computational costs, especially when processing large amounts of text, since the quadratic complexity of transformers means that, the longer the text, the longer the time to process it.</p>
      </sec>
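        <p>The ablation inputs described above can be derived from the ten article segments by simple slicing. This is an illustrative sketch, not the authors' code; the helper and the dictionary keys are our own labels for the conditions in Table 3:</p>
        <preformat>
```python
def ablation_inputs(segments):
    """Build the ablation variants used to probe which parts of a
    ten-segment article are informative for classification."""
    assert len(segments) == 10
    first, middle, last = segments[0:3], segments[3:7], segments[7:10]
    return {
        "first part": first,       # segments 1-3
        "middle part": middle,     # segments 4-7
        "last part": last,         # segments 8-10
        "full text": segments,
        "no middle": first + last,
        "no last": first + middle,
    }
```
        </preformat>
        <p>Each variant would then be joined and fed to the classifier (without the headline, as in Table 3).</p>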
    </sec>
    <sec id="sec-3">
      <title>5. Conclusions</title>
      <p>
        While transformer-based models are the state-of-the-art
approaches for text classification, it is well known that they
suffer from interpretability issues [
        <xref ref-type="bibr" rid="ref18">27</xref>
        ], making it difficult
to gain scientifically interesting new insights on why
texts are classified in one way or another. In this paper,
we have used an adapted version of an inherently
interpretable model, FakeFlow, to inspect the SFNC dataset
with the aim of gaining a better understanding of the
relation between fake news and affective language.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <sec id="sec-4-1">
        <p>We are grateful to the reviewers for their careful and
constructive reviews. We would also like to thank Damir
Korenčić and Ivan Grubišić for their feedback.</p>
        <p>This work was funded by the research project</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] D. M. Lazer, M. A. Baum, Y. Benkler, A. J. Berinsky, K. M. Greenhill, F. Menczer, M. J. Metzger, B. Nyhan, G. Pennycook, D. Rothschild, et al., The science of fake news, Science 359 (2018) 1094-1096.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] G. Ruffo, A. Semeraro, A. Giachanou, P. Rosso, Studying fake news spreading, polarisation dynamics, and manipulation by bots: A tale of networks and language, Computer Science Review 47 (2023) 100531.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] X. Zhou, R. Zafarani, A survey of fake news: Fundamental theories, detection methods, and opportunities, ACM Computing Surveys (CSUR) 53 (2020) 1-40.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] C. Sun, X. Qiu, Y. Xu, X. Huang, How to fine-tune BERT for text classification?, Springer, Berlin, Heidelberg, 2019, p. 194-206. doi:10.1007/978-3-030-32381-3_16.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] C. Carrasco-Farré, The fingerprints of misinformation: how deceptive content differs from reliable sources in terms of cognitive effort and appeal to emotions, Humanities and Social Sciences Communications 9 (2022) 1-18.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] H. Gómez-Adorno, J. P. Posadas-Durán, G. Bel-Enguix, C. P. Capetillo, Overview of FakeDeS at IberLEF 2021: Fake news detection in Spanish shared task, Procesamiento del Lenguaje Natural 67 (2021) 223-231.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] S. Vosoughi, D. Roy, S. Aral, The spread of true and false news online, Science 359 (2018) 1146-1151.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] V. Bakir, A. McStay, Fake news and the economy of emotions: Problems, causes, solutions, Digital Journalism 6 (2018) 154-175.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] M. E. Aragón, H. J. Jarquín-Vásquez, M. Montes-y-Gómez, H. J. Escalante, L. V. Pineda, H. Gómez-Adorno, J. P. Posadas-Durán, G. Bel-Enguix, Overview of MEX-A3T at IberLEF 2020: Fake news and
          <article-title>aggressiveness analysis in Mexican Spanish</article-title>
          .,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ananiadou</surname>
          </string-name>
          , Emotion detection for misinformain:
          <source>IberLEF@ SEPLN</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>222</fpage>
          -
          <lpage>235</lpage>
          . tion: A review,
          <source>Information Fusion</source>
          (
          <year>2024</year>
          )
          <fpage>102300</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.-P.</given-names>
            <surname>Posadas-Durán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Gómez-Adorno</surname>
          </string-name>
          , G. Sidorov, [15]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ghanem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. P.</given-names>
            <surname>Ponzetto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rangel</surname>
          </string-name>
          ,
          <string-name>
            <surname>FakeJ. J. M. Escobar</surname>
          </string-name>
          ,
          <article-title>Detection of fake news in a new cor- Flow: Fake news detection by modeling the flow of pus for the Spanish language</article-title>
          ,
          <source>Journal of Intelligent afective information, in: Proceedings of the 16th &amp; Fuzzy Systems</source>
          <volume>36</volume>
          (
          <year>2019</year>
          )
          <fpage>4869</fpage>
          -
          <lpage>4876</lpage>
          .
          <article-title>Conference of the European Chapter of the Associ-</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Pöttker</surname>
          </string-name>
          ,
          <article-title>News and its communicative quality: ation for Computational Linguistics: Main Volume, the inverted pyramid-when and why did it appear</article-title>
          ?,
          <year>2021</year>
          , pp.
          <fpage>679</fpage>
          -
          <lpage>689</lpage>
          .
          <source>Journalism Studies</source>
          <volume>4</volume>
          (
          <year>2003</year>
          )
          <fpage>501</fpage>
          -
          <lpage>511</lpage>
          . [16]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mikolov</surname>
          </string-name>
          , I. Sutskever,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Corrado</surname>
          </string-name>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xiong</surname>
          </string-name>
          , S. Jiang,
          <string-name>
            <surname>GDUF-DM at FakeDeS J. Dean</surname>
          </string-name>
          ,
          <article-title>Distributed representations of words and 2021: Spanish fake news detection with BERT and phrases and their compositionality</article-title>
          , in: C.
          <article-title>Burges, sample memory</article-title>
          , in: CEUR Workshop proceedings: L.
          <string-name>
            <surname>Bottou</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Welling</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          <string-name>
            <surname>Ghahramani</surname>
          </string-name>
          ,
          <string-name>
            <surname>K. WeinIberian Languages Evaluation Forum</surname>
          </string-name>
          ,
          <year>2021</year>
          , pp.
          <fpage>621</fpage>
          -
          <lpage>berger</lpage>
          (Eds.),
          <source>Advances in Neural Information Pro629. cessing Systems</source>
          , volume
          <volume>26</volume>
          ,
          <string-name>
            <surname>Curran</surname>
            <given-names>Associates</given-names>
          </string-name>
          , Inc.,
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          , M.-
          <string-name>
            <given-names>W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          , BERT:
          <year>2013</year>
          .
          <article-title>Pre-training of deep bidirectional transformers for</article-title>
          [17]
          <string-name>
            <given-names>C.</given-names>
            <surname>Cardellino</surname>
          </string-name>
          ,
          <article-title>Spanish Billion Words Corpus language understanding</article-title>
          , in: J.
          <string-name>
            <surname>Burstein</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <article-title>Do-</article-title>
          and
          <string-name>
            <surname>Embeddings</surname>
          </string-name>
          ,
          <year>2019</year>
          . URL: https://crscardellino. ran, T. Solorio (Eds.),
          <source>Proceedings of the 2019</source>
          Con
          <article-title>- github</article-title>
          .io/SBWCE/. ference of the North American Chapter of the As- [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohammad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Turney</surname>
          </string-name>
          ,
          <article-title>Emotions evoked by comsociation for Computational Linguistics: Human mon words and phrases: Using Mechanical Turk to Language Technologies, Volume 1 (Long and Short create an emotion lexicon</article-title>
          ,
          <source>in: Proceedings of the Papers)</source>
          ,
          <article-title>Association for Computational Linguistics</article-title>
          ,
          <source>NAACL HLT 2010 Workshop on Computational Minneapolis</source>
          , Minnesota,
          <year>2019</year>
          , pp.
          <fpage>4171</fpage>
          -
          <lpage>4186</lpage>
          .
          <article-title>Approaches to Analysis and Generation of Emo-</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>How to fine- tion in Text, Association for Computational Lintune bert for text classification?</article-title>
          , in: Chinese guistics, Los Angeles, CA,
          <year>2010</year>
          , pp.
          <fpage>26</fpage>
          -
          <lpage>34</lpage>
          . URL: Computational Linguistics: 18th China National https://aclanthology.org/W10-0204. Conference,
          <string-name>
            <surname>CCL</surname>
          </string-name>
          <year>2019</year>
          , Kunming, China, Octo- [19]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Mohammad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. D.</given-names>
            <surname>Turney</surname>
          </string-name>
          ,
          <source>Crowdsourcing a ber 18-20</source>
          ,
          <year>2019</year>
          , Proceedings, Springer-Verlag,
          <article-title>word-emotion association lexicon</article-title>
          ,
          <source>Computational Intelligence</source>
          <volume>29</volume>
          (
          <year>2013</year>
          )
          <fpage>436</fpage>
          -
          <lpage>465</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [20]
          <string-name><given-names>M.</given-names> <surname>Honnibal</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Montani</surname></string-name>,
          <string-name><given-names>S. V.</given-names> <surname>Landeghem</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Boyd</surname></string-name>,
          <article-title>spaCy: Industrial-strength natural language processing in Python</article-title>,
          <year>2020</year>. doi:10.5281/zenodo.1212303.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>J.</given-names>
            <surname>Graham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Haidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Nosek</surname>
          </string-name>
          ,
          <article-title>Liberals and conservatives rely on different sets of moral foundations</article-title>,
          <source>Journal of Personality and Social Psychology</source>
          <volume>96</volume>
          (
          <year>2009</year>
          )
          <fpage>1029</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>E.</given-names>
            <surname>Bassignana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Basile</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Patti</surname>
          </string-name>
          ,
          <article-title>Hurtlex: A multilingual lexicon of words to hurt</article-title>
          ,
          <source>in: CEUR Workshop proceedings: Proceedings of the Fifth Italian Conference on Computational Linguistics (CLiC-It)</source>
          , volume
          <volume>2253</volume>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>M.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <article-title>MRC psycholinguistic database: Machine-usable dictionary, version 2.00</article-title>,
          <source>Behavior Research Methods, Instruments, &amp; Computers</source>
          <volume>20</volume>
          (
          <year>1988</year>
          )
          <fpage>6</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>M.</given-names>
            <surname>Guasch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ferré</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Fraga</surname>
          </string-name>
          ,
          <article-title>Spanish norms for affective and lexico-semantic variables for 1,400 words</article-title>
          ,
          <source>Behavior Research Methods</source>
          <volume>48</volume>
          (
          <year>2016</year>
          )
          <fpage>1358</fpage>
          -
          <lpage>1369</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gutiérrez-Fandiño</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Armengol-Estapé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pàmies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Llop-Palao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Silveira-Ocampo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. P.</given-names>
            <surname>Carrino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Armentano-Oller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rodriguez-Penagos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gonzalez-Agirre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Villegas</surname>
          </string-name>
          ,
          <article-title>MarIA: Spanish language models</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>68</volume>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>García-Díaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Vivancos-Vicente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Almela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Valencia-García</surname>
          </string-name>
          ,
          <article-title>UMUTextStats: A linguistic feature extraction tool for Spanish</article-title>
          ,
          <source>in: Proceedings of the Thirteenth Language Resources and Evaluation Conference</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>6035</fpage>
          -
          <lpage>6044</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>A.</given-names>
            <surname>Madsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chandar</surname>
          </string-name>
          ,
          <article-title>Post-hoc interpretability for neural NLP: A survey</article-title>
          ,
          <source>ACM Computing Surveys</source>
          <volume>55</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>