<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>UninaStudents @ SardiStance: Stance Detection in Italian Tweets - Task A</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maurizio Moraca</string-name>
          <email>mau.moraca@studenti.unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianluca Sabella</string-name>
          <email>gia.sabella@studenti.unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Simone Morra</string-name>
          <email>simone.morra2@studenti.unina.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università degli Studi di Napoli Federico II</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>English. This document describes a classification system for the SardiStance task at EVALITA 2020. The task consists in classifying the stance of the author of a series of tweets towards a specific discussion topic. The system was developed by the authors as the final project for the Natural Language Processing class of the Master in Computer Science at the University of Naples Federico II. The proposed system is based on an SVM classifier with a radial basis function kernel, making use of features such as 2 char-grams, unigram hashtag and the Afinn weight computed on automatically translated tweets. The results are promising, in that the system's performance is on average higher than that of the baseline proposed by the task organizers.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
<p>Italiano. This document describes a classification system for the SardiStance task at EVALITA 2020. The task consists in classifying the stance of the author of a series of tweets towards a specific discussion topic. The resulting system was developed by the authors as the final project for the Natural Language Processing course of the Master's degree in Computer Science at the University of Naples Federico II. The proposed system is based on an SVM classifier with a radial basis function kernel, making use of features such as 2 char-grams, unigram hashtag and the Afinn weight computed on automatically translated tweets. The results are promising, in that the performance is on average higher than that of the baseline proposed by the task organizers.</p>
<p>Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
    </sec>
    <sec id="sec-2">
      <title>1 Introduction</title>
      <p>
        This work reports on the application of our
system for the resolution of the EVALITA 2020’s
SardiStance task
        <xref ref-type="bibr" rid="ref3 ref6 ref9">(Basile et al., 2020; Cignarella
et al., 2020)</xref>
        . Stance detection is a classification task aiming at determining the position (stance) of the author of a given text concerning the topic (target) treated in the text itself. In other words, the challenge deals with automatically guessing whether the author of the text is in favour of, against, or neutral towards the topic of a given post. Such automatic systems are useful in political analysis, marketing and opinion mining. Automatic stance determination is a new approach within the opinion mining paradigm that finds its best use in social and political applications. It differs from sentiment analysis in many respects, but the main difference is the drastic reduction to a three-class decision system (in favour, against, neutral), given its main fields of application. The task poses many challenges, as the real target might not be expressly cited in the text, or the text could bear an unclear expression of the author's opinion, as in the following example
        <xref ref-type="bibr" rid="ref6 ref9">(Lai et al., 2020)</xref>
        :
      </p>
      <sec id="sec-2-1">
        <title>Target: Donald Trump</title>
        <p>Tweet: Jeb Bush is the only sane candidate in this
republican lineup.</p>
        <p>
          Although one could erroneously think that
this task is similar to sentiment analysis, the
following example illustrates how, in some cases,
stance detection results are opposed to those
reached by sentiment analysis
          <xref ref-type="bibr" rid="ref6 ref9">(Lai et al., 2020)</xref>
          :
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Target: Climate change is a real concern</title>
        <p>Tweet: @RegimeChangeBC @ndnstyl It’s sad to be the last generation that could change but does nothing. #Auspol</p>
        <p>This tweet presents a negative polarity, although the author claims to be in favour of the target. Classification systems for stance detection, then, attempt to identify the author's position on the target by taking into account features obtained from the text that are largely similar to those used in hate speech detection, irony detection, and mood detection, but with some further effort devoted to the specificity of the task.</p>
        <p>SardiStance is the first Italian initiative focused on the automatic classification of stance in tweets. It includes two different tasks: A) Stance Detection at a textual level, where participants are asked to make their prediction based only on the textual content of the tweet, and B) Stance Detection with the addition of contextual information about the tweet, such as the number of retweets, the number of favourites or the date of posting, and contextual information about the author (location, user's biography). We submitted runs only for task A. Task A requires a three-class classification process where the system has to predict whether the items in the set are in FAVOUR, AGAINST or NEUTRAL, exploiting only the text of the tweet.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2 Description of the System</title>
      <p>
        The system is based on an SVM classifier with a radial basis function (RBF) kernel. Most of the selected features were inspired by
        <xref ref-type="bibr" rid="ref6 ref9">(Lai et al., 2020)</xref>
        and are the following:
• n-grams, bag of n consecutive words in binary representation (presence/absence), where n is 1, 2 or 3.
• char-grams, bag of n consecutive characters in binary representation (presence/absence), where n is 2, 3, 4 or 5.
• unigram hashtag, bag of hashtags in binary representation (presence/absence).
• unigram emoji, bag of emojis in binary representation (presence/absence).
• unigram mentions, bag of mentions in binary representation (presence/absence).
• num uppercase words, number of uppercase words in a tweet.
• punctuation marks, frequency of each punctuation mark (. , ; ! ?) and their total frequency.
• Afinn weight
        <xref ref-type="bibr" rid="ref12">(Nielsen, 2011)</xref>
        , based on a sentiment analysis lexicon made up of 3,500 English words manually annotated with a polarity value in the range [-5, +5]. The value of this feature is computed for each tweet as the sum of the polarities associated with the words of the tweet, translated into English via Google Translate.
• Hu&amp;Liu weight, based on a sentiment analysis lexicon composed of two separate lists of English words: the first contains 2,006 words with a positive connotation, the second 4,783 words with a negative connotation. In this work, a value of +1 is given to words overlapping with the positive list and a value of -1 to words overlapping with the negative list. The total polarity of each tweet is computed as the sum of the weights given to the words in the tweet.
• NRC vector
        <xref ref-type="bibr" rid="ref4">(Bravo-Marquez et al., 2019)</xref>
        , based on a lexicon consisting of a list of English words, each associated with its most representative emotion. The emotions comprised are anger, fear, expectancy, trust, surprise, sadness, joy, and disgust. Furthermore, each entry is also associated with a score indicating the emotion intensity, with a value in the range [0, 1].
• DPL vector
        <xref ref-type="bibr" rid="ref5">(Castellucci et al., 2016)</xref>
        , based on a lexicon of 75,021 lemma::pos-tag pairs, each associated with scores indicating the level of positivity, negativity, and neutrality of the lemma. For each tweet of the dataset, each word was lemmatised and assigned a morpho-syntactic category; for this analysis, LinguA
        <xref ref-type="bibr" rid="ref1 ref2 ref7">(Dell'Orletta, 2009; Attardi and Dell'Orletta, 2009; Attardi et al., 2009)</xref>
        was used. The DPL vector feature consists of a triplet of scores representing the positivity, negativity, and neutrality levels of the tweet, obtained by summing the scores of each lemma::pos-tag pair in the tweet.
Resource URLs: Afinn: https://github.com/fnielsen/afinn/tree/master/afinn/data; Hu&amp;Liu: https://github.com/woodrad/Twitter-SentimentMining/tree/master/Hu%20and%20Liu%20Sentiment%20Lexicon; NRC: http://saifmohammad.com/WebPages/AffectIntensity.htm; DPL: http://sag.art.uniroma2.it/demo-software/distributionalpolarity-lexicon/
      </p>
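<p>To make the binary (presence/absence) representation of these features concrete, here is a minimal stdlib sketch; it is not the authors' pipeline, and the tiny TOY_AFINN lexicon is invented for illustration (the real AFINN resource maps roughly 3,500 English words to integer polarities in [-5, +5]).

```python
import re

def char_ngrams(text, n=2):
    """Binary bag of character n-grams: only presence/absence is kept."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def hashtags(text):
    """Binary bag of hashtags found in the tweet."""
    return set(re.findall(r"#\w+", text.lower()))

# Toy stand-in for the AFINN lexicon (hypothetical entries).
TOY_AFINN = {"sad": -2, "love": 3, "hate": -3}

def afinn_weight(translated_tweet):
    """Sum the lexicon polarities over the words of the
    English-translated tweet, as the Afinn weight feature does."""
    words = re.findall(r"[a-z']+", translated_tweet.lower())
    return sum(TOY_AFINN.get(w, 0) for w in words)
```

In the best configuration reported below, only the 2 char-grams, unigram hashtag and Afinn weight features are combined into a single feature vector.</p>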
      <p>The best settings obtained on the validation set correspond to C = 10 and γ = 0.001.</p>
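<p>For intuition on what the tuned γ controls, the RBF kernel can be written down directly; this is a generic sketch of the kernel formula, not the classifier implementation used here.

```python
import math

def rbf_kernel(x, z, gamma=0.001):
    """RBF kernel K(x, z) = exp(-gamma * ||x - z||^2); smaller gamma makes
    a support vector's influence decay more slowly, giving smoother models."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)
```
</p>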
    </sec>
    <sec id="sec-4">
      <title>3 Results</title>
      <p>
        In this section the performances of our system
obtained during the test phase on the validation and
test set are described. The validation set was
obtained extracting a sample of tweets from the
training set via the Stratified Sampling algorithm
selecting the 20% of the training set. The evaluation
metrics used are the mean value of the F1 score for
the classes Against and Favour, Precision, Recall
and F1 score for each class, and Accuracy. In table
3, the results obtained from the validation set are
shown. From these results, the mean F1 score is
obtained, corresponding to 0.5200. In table 3, the
results obtained from the test set are presented.
lemma::pos tag associated to scores
indicating the level of positivity, negativity, and
neutrality of the lemma, as it follows
For each tweet of the dataset, each word
was lemmatised and, for each resulting
lemma, a morpho-syntactic category was
associated. For this kind of analysis LinguA
        <xref ref-type="bibr" rid="ref1 ref1 ref1 ref2 ref2 ref2 ref7 ref7">(Dell’Orletta, 2009; Attardi and Dell’Orletta,
2009; Attardi et al., 2009)</xref>
        was used. The
DPL vector feature consists of a triplet of
scores representing positivity, negativity, and
neutrality levels in the tweet. To obtain this
value, the scores of each pair lemma::pos tag
in a tweet were summed.
      </p>
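<p>The 20% stratified hold-out described above can be sketched with the standard library alone; the function below is a hypothetical minimal version, not the sampling code actually used.

```python
import random
from collections import defaultdict

def stratified_split(items, labels, valid_frac=0.2, seed=0):
    """Hold out valid_frac of each class so the validation set
    preserves the training set's label proportions."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for item, label in zip(items, labels):
        by_label[label].append(item)
    train, valid = [], []
    for label, group in by_label.items():
        rng.shuffle(group)
        k = round(len(group) * valid_frac)
        valid.extend((x, label) for x in group[:k])
        train.extend((x, label) for x in group[k:])
    return train, valid
```
</p>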
      <p>In order to select the best feature combination, a wrapper-based feature selection algorithm was used to test all possible feature combinations. The combination yielding the best performance on the validation set was chosen, namely 2 char-grams, unigram hashtag and Afinn weight. The evaluation metrics are discussed in Section 3. Since an SVM classifier with an RBF kernel was used, it was important to tune the C and γ parameters.</p>
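<p>The wrapper-based selection amounts to scoring every non-empty subset of features on the validation set and keeping the best; a minimal sketch follows, where the hypothetical evaluate callback (which would train and score the SVM on a given subset) is left abstract.

```python
from itertools import combinations

def wrapper_selection(feature_names, evaluate):
    """Exhaustively evaluate every non-empty feature subset and
    return the one with the highest validation score."""
    best_subset, best_score = None, float("-inf")
    for r in range(1, len(feature_names) + 1):
        for subset in combinations(feature_names, r):
            score = evaluate(subset)
            if score > best_score:
                best_subset, best_score = subset, score
    return best_subset, best_score
```

Note that this is exponential in the number of features, which is affordable only because the feature set here is small.</p>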
      <p>To set the complexity of a generic SVM model, the C parameter is used: it controls the acceptable distance of the decision boundary in the n-dimensional feature space from the support vectors. A higher C value increases the model's complexity, reducing the acceptable distance but also increasing the risk of overfitting; a lower C value leads to more general models that may have reduced discrimination capability. The γ parameter is specific to the RBF kernel: it controls the influence single points have in the feature space and the smoothness of the model, with lower values of γ leading to smoother models and vice versa. SVMs are very sensitive to parameter tuning, so specific optimisation strategies must be adopted. In this case, a grid search was performed using the following ranges of values:
• C [0.1, 0.2, . . . , 1.0, 10, 100, 1000]
• γ [0.001, 0.0009, 0.0008, . . . , 0.0001]
[Table fragment: per-class F1 scores 0.6600, 0.3100, 0.0900]</p>
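<p>The grid search above can be sketched as an exhaustive loop over the two ranges; again, the hypothetical evaluate callback stands in for training the SVM with those hyperparameters and scoring it on the validation set.

```python
from itertools import product

C_GRID = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 10, 100, 1000]
GAMMA_GRID = [0.001, 0.0009, 0.0008, 0.0007, 0.0006,
              0.0005, 0.0004, 0.0003, 0.0002, 0.0001]

def grid_search(evaluate):
    """Try every (C, gamma) pair and keep the best validation score."""
    best_params, best_score = None, float("-inf")
    for C, gamma in product(C_GRID, GAMMA_GRID):
        score = evaluate(C, gamma)
        if score > best_score:
            best_params, best_score = (C, gamma), score
    return best_params, best_score
```
</p>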
      <p>
        In Table 3, on the other hand, the results are compared with the baseline proposed by the task organizers and with the winning runs submitted by the UNITOR team
        <xref ref-type="bibr" rid="ref8">(Giorgioni et al., 2020)</xref>
        for task A. Specifically, the baseline used an SVM classifier based on token unigram features, whereas UNITOR used UmBERTo (https://huggingface.co/Musixmatch/umberto-commoncrawl-cased-v1), adding sentiment, hate and irony tags to the dataset sentences and using additional data to train their systems. As may be noted, the Against class result for our system is higher than the baseline and not far from the first two runs of UNITOR. Further investigation is, conversely, needed as far as the other two classes are concerned.
      </p>
    </sec>
    <sec id="sec-5">
      <title>4 Discussion</title>
      <p>
        Our results are conditioned by the use of lexical resources originally in English and, in particular, by the derivation of the Afinn weight feature, which required the tweets to be translated from Italian into English. As expected, the translation, made via Google Translate, is in some cases poor and approximate, and can give rise to a significant level of ambiguity. We nevertheless decided to accept this risk and translate the tweets directly, instead of translating the lexicon, since in the latter case we thought the ambiguity could have been even greater: translating isolated lexicon entries is far more uncertain because of polysemy, lack of inflectional morphological information, and similar problems, which machine translation systems are trained to solve, at least to a first approximation, when given full sentences. In this view, even an imperfect translation is able to capture part of the semantic context of the texts, allowing us not to resort to lemmatisation and further processing of the lexicon before translation. We chose a classic approach based on an SVM classifier in order to make our results explainable, given the academic context in which this work was carried out. This would have been impossible had we used Deep Neural Networks, whose processes are not "readable" from an external point of view. Furthermore, the size of the dataset distributed for this challenge does not allow reliable training of such systems. In this view, a comparison with results obtained in other stance detection challenges similar to the one proposed here at EVALITA
        <xref ref-type="bibr" rid="ref10 ref11 ref13">(Mohammad et al., 2016; Taulé et al., 2017; Lai et al., 2017)</xref>
        gives strength to our choice of SVMs, which often outperform DNNs. As Master's students, we approached these NLP topics for the first time. Therefore, we are aware that our results are not at the state of the art in the field. However, a comparison with average performances in similar tasks for languages other than English indicates performances that are not significantly different.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>
        We thank our teachers Francesco Cutugno and Maria Di Maro for introducing us to NLP and EVALITA 2020
        <xref ref-type="bibr" rid="ref3">(Basile et al., 2020)</xref>
        and for supporting us in our work. We also thank them for giving us the opportunity to take part in the competition and for encouraging us to do our best.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Attardi</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Dell'Orletta</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Reverse revision and linear tree combination for dependency parsing</article-title>
          .
          <source>In Proceedings of Human Language Technologies</source>
          :
          <article-title>The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics</article-title>
          , Companion Volume:
          <source>Short Papers</source>
          , pages
          <fpage>261</fpage>
          -
          <lpage>264</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Attardi</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dell'Orletta</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Simi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Turian</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Accurate dependency parsing with a stacked multilayer perceptron</article-title>
          .
          <source>Proceedings of EVALITA</source>
          ,
          <volume>9</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Basile</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Croce</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Di</surname>
            <given-names>Maro</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            , and
            <surname>Passaro</surname>
          </string-name>
          ,
          <string-name>
            <surname>L. C.</surname>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Evalita 2020: Overview of the 7th evaluation campaign of natural language processing and speech tools for italian</article-title>
          . In Basile, V.,
          <string-name>
            <surname>Croce</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Di</surname>
            <given-names>Maro</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            , and
            <surname>Passaro</surname>
          </string-name>
          , L. C., editors,
          <source>Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA</source>
          <year>2020</year>
          ),
          <article-title>Online</article-title>
          . CEUR.org.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Bravo-Marquez</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frank</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pfahringer</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Mohammad</surname>
            ,
            <given-names>S. M.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Affectivetweets: a weka package for analyzing affect in tweets</article-title>
          .
          <source>Journal of Machine Learning Research</source>
          ,
          <volume>20</volume>
          (
          <issue>92</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Castellucci</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Croce</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Basili</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>A language independent method for generating large scale polarity lexicons</article-title>
          .
          <source>In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)</source>
          , pages
          <fpage>38</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Cignarella</surname>
            ,
            <given-names>A. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lai</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bosco</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patti</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>SardiStance@EVALITA2020: Overview of the Task on Stance Detection in Italian Tweets</article-title>
          . In
          <string-name>
            <surname>Basile</surname>
          </string-name>
          , V.,
          <string-name>
            <surname>Croce</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Di</surname>
            <given-names>Maro</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            , and
            <surname>Passaro</surname>
          </string-name>
          , L. C., editors,
          <source>Proceedings of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA</source>
          <year>2020</year>
          ).
          <article-title>CEUR-WS.org</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Dell'Orletta</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Ensemble system for part-ofspeech tagging</article-title>
          .
          <source>Proceedings of EVALITA</source>
          ,
          <volume>9</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Giorgioni</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Politi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Salman</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Croce</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Basili</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>UNITOR@Sardistance2020: Combining Transformer-based architectures and Transfer Learning for robust Stance Detection</article-title>
          . In
          <string-name>
            <surname>Basile</surname>
          </string-name>
          , V.,
          <string-name>
            <surname>Croce</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Di</surname>
            <given-names>Maro</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            , and
            <surname>Passaro</surname>
          </string-name>
          , L. C., editors,
          <source>Proceedings of the 7th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA</source>
          <year>2020</year>
          ). CEURWS.org.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Lai</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cignarella</surname>
            ,
            <given-names>A. T.</given-names>
          </string-name>
          , Farías,
          <string-name>
            <given-names>D. I. H.</given-names>
            ,
            <surname>Bosco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Patti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            , and
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <surname>P.</surname>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Multilingual stance detection in social media political debates</article-title>
          .
          <source>Computer Speech &amp; Language</source>
          , page
          <volume>101075</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Lai</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cignarella</surname>
            ,
            <given-names>A. T.</given-names>
          </string-name>
          , Hernández Farías, D. I., et al. (
          <year>2017</year>
          ). itacos at ibereval2017:
          <article-title>Detecting stance in catalan and spanish tweets</article-title>
          .
          <source>In IberEval</source>
          <year>2017</year>
          , volume
          <year>1881</year>
          , pages
          <fpage>185</fpage>
          -
          <lpage>192</lpage>
          . CEUR-WS. org.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Mohammad</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kiritchenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sobhani</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Cherry</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Semeval-2016 task 6: Detecting stance in tweets</article-title>
          .
          <source>In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)</source>
          , pages
          <fpage>31</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Nielsen</surname>
            ,
            <given-names>F. Å.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>A new ANEW: evaluation of a word list for sentiment analysis in microblogs</article-title>
          . In Rowe,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Stankovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Dadzie</surname>
          </string-name>
          , A.-S., and
          <string-name>
            <surname>Hardey</surname>
          </string-name>
          , M., editors,
          <source>Proceedings of the ESWC2011 Workshop on 'Making Sense of Microposts': Big things come in small packages</source>
          , volume
          <volume>718</volume>
          <source>of CEUR Workshop Proceedings</source>
          , pages
          <fpage>93</fpage>
          -
          <lpage>98</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Taulé</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Martí,
          <string-name>
            <given-names>M. A.</given-names>
            ,
            <surname>Rangel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            ,
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Bosco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Patti</surname>
          </string-name>
          ,
          <string-name>
            <surname>V.</surname>
          </string-name>
          , et al. (
          <year>2017</year>
          ).
          <article-title>Overview of the task on stance and gender detection in tweets on catalan independence at ibereval 2017</article-title>
          .
          <source>In 2nd Workshop on Evaluation of Human Language Technologies for Iberian Languages, IberEval</source>
          <year>2017</year>
          , volume
          <year>1881</year>
          , pages
          <fpage>157</fpage>
          -
          <lpage>177</lpage>
          . CEUR-WS.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>