<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Aspie96 at NEGES (IberLEF 2019): Negation Cues Detection in Spanish with Character-Level Convolutional RNN and Tokenization</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Turin</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>342</fpage>
      <lpage>351</lpage>
      <abstract>
<p>This paper describes the model used by the Aspie96 team in the subtask A of NEGES 2019 (part of IberLEF 2019), aimed at the automatic detection of negation cues in Spanish. The approach is based on a neural network which uses character-level features to compute token-level features.</p>
      </abstract>
      <kwd-group>
        <kwd>negation</kwd>
        <kwd>NEGES</kwd>
        <kwd>Spanish</kwd>
        <kwd>natural language processing</kwd>
        <kwd>neural network</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
<p>
        The NEGES shared task, described in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and part of IberLEF 2019, proposed two
subtasks: subtask A (negation cue detection) and subtask B
(role of negation in sentiment analysis). The Aspie96 team took part in subtask
A, which required participants to develop a system capable of identifying the
negation cues present in a document. The NEGES task had previously been
presented in 2018, as described in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] (subtask A of NEGES 2019 corresponds
to task 1 of NEGES 2018).
      </p>
<p>Each negation cue could be an individual word or be composed of multiple
(even non-contiguous) words. In the provided training and testing datasets,
documents were already tokenized. Tokens were either words or non-words (for
instance, in the case of punctuation). For each document, PoS-tags and lemmas
were also provided.</p>
      <p>
        The data used in the task was from the SFU ReviewSP-NEG corpus,
presented in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], consisting of reviews, in Spanish, from 8 different domains (cars,
hotels, washing machines, books, cell phones, music, computers and movies).
      </p>
<p>The main characteristic of the presented model, described in the next section,
is that it is not based on word-level features, nor on any kind of knowledge
that is not automatically extracted from the provided training dataset.</p>
    </sec>
    <sec id="sec-2">
      <title>Method</title>
      <p>
        The model used by the Aspie96 team was based upon the system presented in
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] in the context of IronITA (EVALITA 2018), a shared task for irony detection
in tweets in Italian described in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]: a neural network able to classify short texts
without making use of word-level features or pretrained layers.
      </p>
      <p>
        It had been completely developed from scratch by the author of this paper
(and of [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) for his thesis on the classification of tweets in English and Italian.
The thesis, in Italian, is not currently publicly available, but the code
originally used is, under the Expat License:
https://github.com/Aspie96/ThesisValentinoGiudice2018.
      </p>
<p>It constitutes the basis of the system used in the subtask A of NEGES, thus
it is crucial to understand it first. A representation is provided in Figure 1.</p>
<p>Its input was represented as a fixed-length list (leading to padding,
added to the left, or truncation, to the right, where needed) of sparse vectors.
Each vector of the list represented an individual character of the tweet and
contained flags whose values were either 0 or 1. Most of the flags were mutually
exclusive and were used to identify a character among a list of known ones.
Additional flags were used to represent additional properties of the character
(such as being uppercase).</p>
      <p>
        A series of three unidimensional convolutional layers, each with a width of 3
and an output depth of 8, convolving along the length of the tweet, was used to
reduce the size of such a representation and to encode characters in a way which
accounted for the context provided by the neighbouring characters. The output
of the last convolutional layer, still a temporal sequence of fixed-size vectors, was
used as input to a bidirectional GRU layer, originally described in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] (similar
to the LSTM architecture, originally described in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and then improved in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ])
whose role was to produce an individual vector representation of the whole text.
The vector obtained in this way could, therefore, be the input of a simple binary
classifier: an individual dense layer was used for this purpose.
      </p>
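<p>The shapes involved in the convolutional stage described above can be sketched in plain numpy: three width-3, depth-8 convolutions shrink and re-encode the character sequence before the recurrent layer. The input length (140) and number of flags (60) below are illustrative assumptions, not values taken from the paper.</p>

```python
import numpy as np

def conv1d(x, w, b):
    # x: (length, in_depth); w: (width, in_depth, out_depth)
    width = w.shape[0]
    out = np.stack([
        np.tensordot(x[i:i + width], w, axes=([0, 1], [0, 1])) + b
        for i in range(x.shape[0] - width + 1)
    ])
    return np.maximum(out, 0.0)  # ReLU nonlinearity

rng = np.random.default_rng(0)
x = rng.random((140, 60))      # 140 characters, 60 binary flags each
for _ in range(3):             # three convolutional layers, width 3, depth 8
    w = rng.standard_normal((3, x.shape[1], 8)) * 0.1
    x = conv1d(x, w, np.zeros(8))
print(x.shape)                 # the sequence fed to the bidirectional GRU
```

<p>The bidirectional GRU then reduces this (134, 8) sequence to a single vector, which the final dense layer classifies.</p>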
      <p>
        The described neural network, as presented in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], had been developed for
the English language (although its application on tweets in English was outside
the scope of [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]) and the Italian language. Because some characters exist in
the Spanish language that do not exist in Italian or English, the neural
network had to be adapted for Spanish, leading to a slightly different
input representation, as described in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>The full list of known characters in the used representation was the following:
Space ! " # $ % &amp; ' ( ) * + , - . / 0 1 2 3 4 5 6 7
8 9 : ; = ? @ [ ] \_ a b c d e f g h i j k l m n o p
q r s t u v w x y z | ~</p>
<p>Emojis were represented similarly to their Unicode name (in English), with
additional flags.</p>
<p>
        The full list of additional flags, as described in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], was:
      </p>
      <p>Uppercase letter Indicating whether the character is an uppercase letter.</p>
      <p>Accent Indicating whether the character is an accented vowel, regardless of the
accent being acute or grave.</p>
      <p>Emoji Indicating whether the character is part of the Unicode name of an
emoji.</p>
      <p>Emoji start Indicating whether the character is the first character in the Unicode
name of an emoji.</p>
      <p>Letter Indicating whether the character is a letter.</p>
      <p>Number Indicating whether the character is a numerical digit.</p>
      <p>Inverted Indicating whether the character is an opening inverted question mark or
exclamation mark.</p>
      <p>Tilde Indicating whether the character is an N with virgulilla (ñ).</p>
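<p>Under the flag scheme above, each character maps to a sparse binary vector: a one-hot position over the known alphabet plus the property flags. The following is a minimal sketch; the helper name encode_char is an illustrative assumption, and the emoji flags are omitted for brevity.</p>

```python
# One-hot alphabet (the paper's known-character list) plus six property flags:
# uppercase, accented vowel, letter, digit, inverted punctuation, n with tilde.
ALPHABET = " !\"#$%&'()*+,-./0123456789:;=?@[]\\_abcdefghijklmnopqrstuvwxyz|~"
ACCENTED = {"á": "a", "é": "e", "í": "i", "ó": "o", "ú": "u"}
EXTRA_FLAGS = 6

def encode_char(ch):
    vec = [0] * (len(ALPHABET) + EXTRA_FLAGS)
    upper = ch.isupper()
    base = ch.lower()
    accent = base in ACCENTED
    if accent:
        base = ACCENTED[base]          # reduce to the unaccented vowel
    tilde = base == "ñ"
    if tilde:
        base = "n"                     # reduce ñ to n, flagging it
    if base in ALPHABET:
        vec[ALPHABET.index(base)] = 1  # one-hot over the known characters
    f = len(ALPHABET)
    vec[f + 0] = int(upper)            # uppercase letter
    vec[f + 1] = int(accent)           # accented vowel
    vec[f + 2] = int(base.isalpha())   # letter
    vec[f + 3] = int(base.isdigit())   # numerical digit
    vec[f + 4] = int(ch in "¿¡")       # inverted question/exclamation mark
    vec[f + 5] = int(tilde)            # n with virgulilla
    return vec

v = encode_char("Ñ")  # uppercase, letter and tilde flags set, one-hot on "n"
```

<p>In this way every character, including ones outside the known list, still yields a fixed-size vector, with unknown characters carrying only their property flags.</p>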
      <p>
        The neural network presented in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], except for its last layer, the
classifier, will, from now on, be referred to as networkA and constitutes the basis
for the model used for the subtask A of NEGES. Thus, the last layer of networkA
is the recurrent one and its output constitutes an individual vector representation
of the text given in input. It must be noted that it has been slightly tweaked
between different tasks (not always out of necessity, but resulting in several
slightly different versions): the differences, however, are minor.
      </p>
<p>The subtask A of NEGES was quite different from tweet classification: the
texts were much longer than individual tweets and they were not to be classified;
instead, elements within the text were to be recognized.</p>
<p>The Aspie96 team considered the problem as one of classification of words
within the text: each word had to be classified as either:
- Not part of a negation cue at all (118441 instances in the training set).
- The first word of a new negation cue (2490 instances in the training set).
- Part of the latest started negation cue (597 instances in the training set).</p>
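<p>The three classes above amount to a simple per-word tagging scheme. The following sketch shows how gold cue annotations could be turned into such labels; the function name and the input format (cues as lists of word indices) are illustrative assumptions.</p>

```python
# Labels: 0 = not part of any cue, 1 = first word of a new cue,
# 2 = part of the latest started cue.
def label_words(num_words, cues):
    labels = [0] * num_words
    for cue in cues:                  # each cue: word indices, possibly gapped
        first, *rest = sorted(cue)
        labels[first] = 1
        for i in rest:
            labels[i] = 2
    return labels

# a discontinuous cue spanning words 0, 3 and 4 of a six-word sentence
print(label_words(6, [[0, 3, 4]]))  # [1, 0, 0, 2, 2, 0]
```

<p>Note that this encoding is exactly what makes overlapping cues unrepresentable, which is the limitation discussed next.</p>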
<p>This assumed that a negation cue containing multiple words, although possibly
not contiguous, could not contain, between its first and last words, any part of
other negation cues. This was mostly the case in the training set: only 26 words
constituted exceptions to this rule, as they were non-beginning parts of negation
cues and were not part of the last begun negation cue.</p>
<p>In the input texts, already tokenized, each token was marked as either a word
or not a word (according to a simple match with a regular expression): words
were the main focus of the task, as they were the ones which had to be classified.</p>
      <p>The representation of the text was built as follows.</p>
      <p>In the text, spaces (which were not provided) were inserted back, simply by
putting one between every two tokens. This might not have been accurate (it
often is not in the case of punctuation as, for instance, commas are not usually
preceded by a space), but, because the network was not pretrained and all the
knowledge about the language had to be gained from scratch, this was irrelevant.</p>
<p>Then each word was represented as a fixed-size list (of length 50) of vectors,
each of which represented an individual character. The word being represented
was centered in its representation and, because the length of the representation
of each word was fixed, the neighbouring characters, on the left and on the
right, whether belonging to other words or not (in the case of spaces or
non-word tokens), were used as padding.</p>
<p>To each vector representing an individual character a flag was added, the
token flag, whose value specified whether the character was part of the word being
represented or of the padding (its name comes from the fact that its value is 1 if
the character is part of the token being represented).</p>
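<p>Putting the space re-insertion, the centering and the token flag together, the per-word representation could be built along these lines. The window length of 14 matches the worked example that follows (the model itself used 50), and the helper name word_windows is an illustrative assumption.</p>

```python
def word_windows(tokens, window=14):
    # rebuild the text with one space between every two tokens
    text, spans = "", []
    for tok in tokens:
        if text:
            text += " "
        spans.append((len(text), len(text) + len(tok)))
        text += tok
    out = []
    for start, end in spans:
        pad = window - (end - start)
        left = start - pad // 2            # half the padding on the left
        right = end + (pad - pad // 2)     # the rest on the right
        chars, flags = [], []
        for i in range(left, right):
            chars.append(text[i] if 0 <= i < len(text) else " ")
            flags.append(1 if start <= i < end else 0)  # the token flag
        out.append(("".join(chars), flags))
    return out

tokens = ["Lorem", "ipsum", "dolor", "sit", "amet", ",", "consectetur"]
w, f = word_windows(tokens)[4]   # the window centered on "amet"
print(w)   # " sit amet , co"
print(f)   # 1 exactly on the four characters of "amet"
```

<p>In the actual model only word tokens would receive a window; non-word tokens contribute only as context characters inside the windows of neighbouring words.</p>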
      <sec id="sec-2-1">
        <title>As an example, let us consider the following sentence:</title>
        <p>Lorem ipsum dolor sit amet, consectetur
adipisci elit, sed do eiusmod tempor
incidunt ut labore et dolore magna
aliqua.</p>
      </sec>
      <sec id="sec-2-2">
        <title>And let's assume the length of the representation of each word to be 14.</title>
<p>The 5th word (amet) has a length of 4 and will
therefore need 14 - 4 = 10 characters of padding: 10/2 = 5
on the left and 10/2 = 5 on the right.</p>
        <p>It will, therefore, be represented as:</p>
        <p>sit amet, con</p>
      </sec>
      <sec id="sec-2-3">
        <title>The underlining means that the token flag for the</title>
        <p>marked characters has value 1.</p>
        <p>The whole text has 19 words and will thus be
represented as 19 lists of 14 vectors: each list representing
a word (and its neighbouring characters) and each
vector representing an individual character.</p>
<p>Because the representation of each token already encodes the neighbouring
characters as well, there is no need to represent non-word tokens as individual
elements.</p>
<p>This results in a representation in which each element corresponds to a word
and is a fixed-size list of vectors of binary flags, where each individual vector
represents a character.</p>
<p>Based on this representation and on networkA (which, given a short text
as input, represented as a list of vectors of flags, outputs an individual vector
representation of such text), a model to solve the task could be built.</p>
<p>networkA was used to convert the representation of each word into an
individual vector: in the model presented by the Aspie96 team for the subtask A of
NEGES, networkA convolves through the text, word by word.</p>
<p>The result is a list of as many fixed-size vectors as there are words in the
original text, such that each one of them represents the corresponding word
in the original text, with its context provided by the neighbouring characters,
thus taking into account the different meanings a word could have depending on
its context.</p>
<p>The representation obtained in this way is the input to the following layer
of the neural network: a recurrent layer. All outputs of the recurrent layer, each
of which is a vector, are considered (as many as the words in the text),
not just the last one. The role of this additional recurrent layer is to consider
each word not in isolation, but according to the context provided by the other
words, reading the text fully.</p>
<p>A simple classifier (a simple neural network, using the softmax function as
output) is then applied to each such vector individually to get, for each word
(corresponding, in position, to the vector), its classification into one of the three
classes.</p>
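<p>The per-word classifier amounts to one dense layer with softmax applied independently at each position of the recurrent layer's output. A minimal numpy sketch follows; the vector size (32) and word count (19) are illustrative assumptions.</p>

```python
import numpy as np

def softmax(z):
    # numerically stable softmax along the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
h = rng.random((19, 32))           # one 32-dim vector per word, 19 words
w = rng.standard_normal((32, 3))   # dense layer mapping to the 3 classes
b = np.zeros(3)
probs = softmax(h @ w + b)         # (19, 3): class distribution per word
pred = probs.argmax(axis=1)        # one of the three classes per word
```

<p>Because the same weights are applied at every position, the classifier itself carries no sequential information; all context comes from the convolutional and recurrent layers before it.</p>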
        <p>The resulting model is represented in Figure 2.</p>
<p>It must be noted that in the presented model PoS-tags and lemmas are not
used.</p>
<p>In order to reduce the amount of false positives, a copy of the model was used
to distinguish, simply, between words that were not part of negation cues and
words that were (whether as the first word or not), resulting in two copies of the
model, trained separately (one for classification among all three considered
classes and one for classification between the first class and the conjunction of the
other two). In the final classification, words were marked as not part of negation
cues if that was the result of either one of the copies of the model; otherwise,
one of the remaining two classes was selected.</p>
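<p>The final decision rule described above can be sketched as follows; the function and variable names are illustrative assumptions.</p>

```python
# Labels: 0 = not part of a cue, 1 = first word of a cue, 2 = continuation.
# A word is kept out of any cue if either copy of the model says so;
# otherwise the three-class copy picks between classes 1 and 2.
def combine(three_class_pred, binary_is_cue):
    return [
        three if (three != 0 and is_cue) else 0
        for three, is_cue in zip(three_class_pred, binary_is_cue)
    ]

print(combine([1, 2, 0, 1], [True, False, True, True]))  # [1, 0, 0, 1]
```

<p>The binary copy can thus only veto cue predictions, never add them, which is why it reduces false positives at the cost of some false negatives.</p>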
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Results</title>
      <p>
        Models have been evaluated by the task organizers using precision, recall and
F1-score, using the same script developed for [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Only word tokens are considered
and:
- A true positive requires all parts of a negation cue to be correctly classified
(essentially, a negation cue is correctly classified if all and only the tokens it
contains are classified as part of the same negation).
- To avoid penalizing partial matches more than missed matches, partial
matches are counted as false negatives, but not as false positives.
      </p>
      <p>The average precision, recall, and F1-score for the Aspie96 team were,
respectively, 0.1880, 0.2834 and 0.2258. As a comparison, the best average precision,
recall and F1-score were, respectively, 0.9182 (NLP UNED team), 0.7940 (CLiC
team) and 0.8409 (CLiC team).</p>
      <p>The results do suggest the unsuitability of the model for the task.</p>
<p>The reasons for this need to be investigated. The copy of the model which
simply classifies words as part of a negation cue or not, although a seemingly
unnecessary source of complexity (as the problem is one of classification among
three classes, which is dealt with by the other copy of the model), is not the
source of the problem, as it causes very few false negatives on its own. In a test,
it resulted in 9480 true negatives, 365 false positives (which can be dealt with by
the main model, which classifies among all three classes), 560 true positives and
just 54 false negatives. As its purpose was simply to prevent at least some false
positives, without causing too many false negatives, this was its intended behavior.</p>
<p>Another apparent cause of the poor results could seem to be the main
assumption of the model: that three classes are enough to solve the problem, although
it is not a problem of classification by itself. However, it must be noted that
in the training set only 26 words could not possibly have been properly handled
by the system as a result of this assumption: in every other case, no negation
cue contained, between its starting and ending word, any part of other negation
cues.</p>
<p>The root of the problem must, therefore, be the structure of the model itself,
perhaps inapplicable due to the very high imbalance of the classes.</p>
    </sec>
    <sec id="sec-4">
      <title>Discussion</title>
      <p>The poor results obtained with the proposed model suggest the unsuitability
of the approach for the task at hand, while the good results obtained by other
teams suggest much room for improvement.</p>
<p>The main disadvantages of the proposed model are, arguably, its most basic
characteristics: it only works on what can be strictly and automatically learned
from the text contained in the documents, but much more could be used, such as,
to name just a few, the PoS-tags and lemmas provided in the datasets,
pre-computed word-level features such as word embeddings, and common
knowledge about negations and the Spanish language.</p>
      <p>
        An extremely similar model was employed in the FACT (Factuality Analysis
and Classification Task) task for factuality classification, also part of IberLEF
2019 as described in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], resulting in much better results: the task consisted in
classifying marked words, within a text, among three different categories. The
Aspie96 team ranked 2nd (out of a total of 5 teams), with results very close to
those of the first-ranking team.
      </p>
      <p>
        This means this kind of model is indeed suitable for the classification of
words within a text, while the results obtained in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] show, in general, the suitability
of character-level models for natural language processing.
      </p>
<p>Further research may include the creation of a unique structure of networkA,
without alterations or multiple versions, to be applied to different tasks,
resulting in a more stable neural network, and the exact same structure, based on
networkA, to be used for both the FACT task and the NEGES task, in order to
allow for a better comparison between the results.</p>
<p>Any persistent difference in the results will be due, strictly, to the differences
between the tasks.</p>
<p>The main difference is that the subtask A of NEGES isn't one of classification
of specific words within a text but, rather, it requires the recognition of elements,
some of which are composed of multiple, even non-contiguous, words. It is very
possible that, although this can indeed be converted to a task of classification
of every word within the text (which is already different from only trying to
classify some, already marked, ones), the two kinds of task are not to be seen as
similar ones. Alternatively, the problem might have been caused by the highly
unbalanced classes: the negative class is much bigger than the other ones, as it
includes every word in the text which isn't part of a negation cue. In FACT, the
three classes had 2918, 1171 and 255 words each: most words did not belong to
any class and they were not to be classified.</p>
      <p>
        The main point of the model presented in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] was to be able to work across
different tasks, in different languages (originally English and Italian), and any
model based upon it should inherit such properties.
      </p>
<p>It is, therefore, necessary to create a more general model than the one used
in the FACT task, able to work properly in different tasks. Because the results
obtained in this paper are so poor, the NEGES dataset can still be used for this
purpose, with the aim of producing a model able to work not only on FACT but
also on NEGES.</p>
<p>Whether this is possible will require more research and a more stable
structure for networkA, upon which to build a model for classification of words within
a text.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
<mixed-citation>1. Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734. Association for Computational Linguistics, Doha, Qatar (Oct 2014), https://doi.org/10.3115/v1/D14-1179</mixed-citation>
      </ref>
      <ref id="ref2">
<mixed-citation>2. Cignarella, A.T., Frenda, S., Basile, V., Bosco, C., Patti, V., Rosso, P., et al.: Overview of the EVALITA 2018 Task on Irony Detection in Italian Tweets (IronITA). In: Proceedings of the 6th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA'18), pp. 26–34 (2018), http://ceur-ws.org/Vol-2263/paper005.pdf</mixed-citation>
      </ref>
      <ref id="ref3">
<mixed-citation>3. Gers, F.A., Schmidhuber, J., Cummins, F.: Learning to Forget: Continual Prediction with LSTM. Neural Computation 12(10), 2451–2471 (2000), https://doi.org/10.1162/089976600300015015</mixed-citation>
      </ref>
      <ref id="ref4">
<mixed-citation>4. Giudice, V.: Aspie96 at IronITA (EVALITA 2018): Irony Detection in Italian Tweets with Character-Level Convolutional RNN. In: Proceedings of the 6th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA'18), pp. 160–165 (2018), http://ceur-ws.org/Vol-2263/paper026.pdf</mixed-citation>
      </ref>
      <ref id="ref5">
<mixed-citation>5. Giudice, V.: Aspie96 at FACT (IberLEF 2019): Factuality Classification in Spanish Texts with Character-Level Convolutional RNN and Tokenization. In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings, CEUR-WS, Bilbao, Spain (2019)</mixed-citation>
      </ref>
      <ref id="ref6">
<mixed-citation>6. Giudice, V.: Aspie96 at HAHA (IberLEF 2019): Humor Detection in Spanish Tweets with Character-Level Convolutional RNN. In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings, CEUR-WS, Bilbao, Spain (2019)</mixed-citation>
      </ref>
      <ref id="ref7">
<mixed-citation>7. Hochreiter, S., Schmidhuber, J.: Long Short-Term Memory. Neural Computation 9(8), 1735–1780 (1997), https://doi.org/10.1162/neco.1997.9.8.1735</mixed-citation>
      </ref>
      <ref id="ref8">
<mixed-citation>8. Jiménez-Zafra, S.M., Cruz Díaz, N.P., Morante, R., Martín-Valdivia, M.T.: NEGES 2019 Task: Negation in Spanish. In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2019). CEUR Workshop Proceedings, CEUR-WS, Bilbao, Spain (2019)</mixed-citation>
      </ref>
      <ref id="ref9">
<mixed-citation>9. Jiménez-Zafra, S.M., Cruz Díaz, N.P., Morante, R., Martín-Valdivia, M.T.: NEGES 2018: Workshop on Negation in Spanish. Procesamiento del Lenguaje Natural 62, 21–28 (2019)</mixed-citation>
      </ref>
      <ref id="ref10">
<mixed-citation>10. Jiménez-Zafra, S.M., Taulé, M., Martín-Valdivia, M.T., Ureña-López, L.A., Martí, M.A.: SFU ReviewSP-NEG: a Spanish corpus annotated with negation for sentiment analysis. A typology of negation patterns. Language Resources and Evaluation 52(2), 533–569 (2018), https://doi.org/10.1007/s10579-017-9391-x</mixed-citation>
      </ref>
      <ref id="ref11">
<mixed-citation>11. Morante, R., Blanco, E.: *SEM 2012 Shared Task: Resolving the Scope and Focus of Negation. In: Proceedings of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pp. 265–274. SemEval '12, Association for Computational Linguistics, Stroudsburg, PA, USA (2012), https://aclweb.org/anthology/papers/S/S12/S12-1035/</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>