<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>NEGES 2019 Task: Negation in Spanish</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Salud María Jiménez-Zafra</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Noa Patricia Cruz Díaz</string-name>
          <email>contact@noacruz.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roser Morante</string-name>
          <email>r.morantevallejo@vu.nl</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>María-Teresa Martín-Valdivia</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CLTL Lab, Computational Linguistics, VU University Amsterdam</institution>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Centro de Excelencia de Inteligencia Artificial</institution>
          ,
          <addr-line>Bankia</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>SINAI, Computer Science Department, CEATIC, Universidad de Jaén</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>329</fpage>
      <lpage>341</lpage>
      <abstract>
        <p>This paper presents the 2019 edition of the NEGES task, Negation in Spanish, held on September 24 as part of the evaluation forum IberLEF at the 35th International Conference of the Spanish Society for Natural Language Processing. In this edition, two sub-tasks were proposed: Sub-task A: “Negation cues detection” and Sub-task B: “Role of negation in sentiment analysis”. The dataset used for both sub-tasks was the SFU ReviewSP-NEG corpus. About 13 teams showed interest in the task and 5 teams finally submitted results.</p>
      </abstract>
      <kwd-group>
        <kwd>NEGES 2019</kwd>
        <kwd>negation</kwd>
        <kwd>negation processing</kwd>
        <kwd>cue detection</kwd>
        <kwd>sentiment analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Negation is a complex linguistic phenomenon that has been widely studied from
a theoretical perspective [
        <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
        ], and less from an applied point of view.
However, interest in the computational treatment of this phenomenon is
growing, because it is relevant for a wide range of Natural Language Processing
applications such as sentiment analysis or information retrieval, where it is
crucial to know when the meaning of a part of the text changes due to the presence
of negation. In fact, in recent years, several challenges and shared tasks have
focused on negation processing: the BioNLP'09 Shared Task 3 [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], the
NeSpNLP 2010 Workshop: Negation and Speculation in Natural Language Processing
[
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], the CoNLL-2010 shared task [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], the i2b2 NLP Challenge [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ], the *SEM
2012 Shared Task [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], the ShARe/CLEF eHealth Evaluation Lab 2014 Task 2
[
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], the ExProM Workshop: Extra-Propositional Aspects of Meaning in
Computational Linguistics [
        <xref ref-type="bibr" rid="ref26 ref4 ref6">26, 6, 4</xref>
        ] and the SemBEaR Workshop: Computational
Semantics Beyond Events and Roles [
        <xref ref-type="bibr" rid="ref1 ref5">5, 1</xref>
        ].
      </p>
      <p>
        However, most of the research on negation has been done for English.
Therefore, the aim of the NEGES task is to advance the study of this phenomenon in
Spanish, the second most widely spoken language in the world and the third
most widely used on the Internet. The 2018 edition consisted of three tasks
related to different aspects of negation [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]: Task 1 on reaching an agreement on
the guidelines to follow for the annotation of negation in Spanish, Task 2 on
identifying negation cues, and Task 3 on evaluating the role of negation in
sentiment analysis. A total of 4 teams participated in the workshop: 2 for developing
annotation guidelines and 2 for negation cue detection. Task 3 had no
participants. In this edition, the objective is to continue bringing together the scientific
community working on negation to discuss how it is being addressed and
what the main problems encountered are, as well as to share resources and tools
aimed at processing negation in Spanish.
      </p>
      <p>The rest of this paper is organized as follows. The proposed sub-tasks are
described in Section 2, and the data used is detailed in Section 3. Evaluation
measures are introduced in Section 4. Participating systems and their results are
summarized in Section 5. Finally, Section 6 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>Task description</title>
      <p>
        In the 2019 edition of NEGES task, Negation in Spanish, two sub-tasks were
proposed as a continuation of the tasks carried out in NEGES 2018 [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ].
- Sub-task A: “Negation cues detection”
- Sub-task B: “Role of negation in sentiment analysis”
      </p>
      <p>The following is a description of each sub-task.</p>
      <sec id="sec-2-1">
        <title>Sub-task A: Negation cues detection</title>
        <p>Sub-task A of NEGES 2019 aimed to promote the development and
evaluation of systems for identifying negation cues in Spanish. Negation cues could be
simple, if they were expressed by a single token (e.g., “no” [no/not], “sin”
[without]), continuous, if they were composed of a sequence of two or more contiguous
tokens (e.g., “ni siquiera” [not even], “sin ningún” [without any]), or
discontinuous, if they consisted of a sequence of two or more non-contiguous tokens (e.g.,
“no...apenas” [not...hardly], “no...nada” [not...nothing]). For example, in
sentence (1) the systems had to identify four negation cues: i) the discontinuous
cue “No...nada” [Not...nothing], ii) the simple cue “no” [no/not], iii) the simple
cue “no” [no/not] again, and iv) the continuous cue “ni siquiera” [not even].
(1)</p>
        <p>No1 tengo nada1 en contra del servicio del hotel, pero no2 pienso volver,
no3 me ha gustado, ni siquiera4 las vistas son buenas.</p>
        <p>I have nothing against the service of the hotel, but I do not plan to return,
I did not like it, not even the views are good.</p>
        <sec id="sec-2-1-1">
          <title>4 http://www.sepln.org/workshops/neges2019/</title>
          <p>
            Participants received a set of training and development data consisting of
reviews of movies, books and products from the SFU ReviewSP-NEG corpus [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ]
to build their systems during the development phase. At a later stage, the test
set was made available for evaluation. Finally, the participants' submissions
were evaluated against the gold-standard annotations. It should be noted that
the data sets used in this sub-task were manually annotated with negation cues
by domain experts, following well-defined annotation guidelines [
            <xref ref-type="bibr" rid="ref19 ref23">19, 23</xref>
            ].
          </p>
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>Sub-task B: Role of negation in sentiment analysis</title>
        <p>
          Sub-task B of NEGES 2019 proposed to evaluate the impact of accurate negation
detection on sentiment analysis. In this task, participants had to develop a system
that used the negation information contained in a corpus of reviews of movies,
books and products, the SFU ReviewSP-NEG corpus [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ], to improve the task
of polarity classification. They had to classify each review as positive or negative
using a heuristic that incorporated negation processing. For example, systems
should classify a review such as (2) as negative using the negation information
provided by the organization, a sample of which is shown in Figure 1.
(2)
        </p>
        <p>El 307 es muy bonito, pero no os lo recomiendo. Por un fallo eléctrico te
puedes matar en la carretera.</p>
        <p>The 307 is very nice, but I don't recommend it. An electrical failure can kill
you on the road.</p>
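The task leaves the concrete heuristic to the participants; one common lexicon-based choice is to invert the contribution of opinion words that fall inside a negation scope. A minimal sketch under that assumption (the tiny lexicon, its scores and the scope span below are invented for illustration):

```python
# Hypothetical mini sentiment lexicon: word -> polarity score.
LEXICON = {"bonito": 1.0, "recomiendo": 1.0, "fallo": -1.0, "matar": -1.0}

def classify(tokens, negated_spans):
    """Sum lexicon scores over tokens, flipping the sign of words
    that fall inside any negation scope (half-open index spans)."""
    negated = {i for start, end in negated_spans for i in range(start, end)}
    score = 0.0
    for i, tok in enumerate(tokens):
        s = LEXICON.get(tok, 0.0)
        score += -s if i in negated else s
    return "positive" if score > 0 else "negative"

# Example (2): "... no os lo recomiendo ..." -- "recomiendo" is negated.
tokens = ["El", "307", "es", "muy", "bonito", ",", "pero", "no", "os", "lo",
          "recomiendo", ".", "Por", "un", "fallo", "electrico", "te",
          "puedes", "matar", "en", "la", "carretera", "."]
print(classify(tokens, negated_spans=[(8, 12)]))  # scope: "os lo recomiendo ."
```

With the scope applied, the positive word "recomiendo" contributes negatively and the review comes out negative; without negation handling the positive and negative words would cancel out, which is exactly the failure mode the sub-task probes.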
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Data</title>
      <p>
        The SFU ReviewSP-NEG corpus [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] was the collection of documents provided
for training and testing the systems in Sub-task A and Sub-task B. This corpus
is an extension of the Spanish part of the SFU Review corpus [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] and it could
be considered the counterpart of the SFU Review Corpus with negation and
speculation annotations [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
      </p>
      <p>
        The Spanish SFU Review corpus [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] consists of 400 reviews extracted from
the website Ciao.es that belong to 8 different domains: cars, hotels, washing
machines, books, cell phones, music, computers, and movies. For each domain there
are 50 positive and 50 negative reviews, defined as positive or negative based on
the number of stars given by the reviewer (1-2 = negative; 4-5 = positive; 3-star
reviews were not included). Later, it was extended to the SFU ReviewSP-NEG
corpus [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] in which each review was automatically annotated at the token level
with fine- and coarse-grained PoS tags and lemmas using Freeling [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ], and manually
annotated at the sentence level with negation cues and their corresponding scopes
and events. Moreover, it is the first Spanish corpus in which it was annotated
how negation affects the words within its scope, that is, whether there is a change
in polarity or an increase or decrease of its value. Finally, it is important
to note that the corpus is in XML format and is freely available for research
purposes.
      </p>
      <sec id="sec-3-1">
        <title>Datasets Sub-task A</title>
        <p>
          The SFU ReviewSP-NEG corpus [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] was randomly split into development,
training and test sets, with 33 reviews per domain for training, 7 reviews per
domain for development and 10 reviews per domain for testing. The data was converted
to CoNLL format [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] where each line corresponds to a token, each annotation
is provided in a column and empty lines indicate the end of a sentence. The
content of the given columns is:
- Column 1: domain filename
- Column 2: sentence number within domain filename
- Column 3: token number within sentence
- Column 4: word
- Column 5: lemma
- Column 6: part-of-speech
- Column 7: part-of-speech type
- Columns 8 to last: if the sentence has no negations, column 8 has a “***”
value and there are no more columns. Otherwise, if the sentence has negations, the
annotation for each negation is provided in three columns. The first column
contains the word that belongs to the negation cue. The second and third
columns contain “-”.
        </p>
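A reader for this layout can be sketched as follows; the field names and the two sample lines are our own illustration of the column description above, not taken from the released data:

```python
def read_conll(lines):
    """Parse Sub-task A CoNLL-style lines into sentences of token rows.

    Each row keeps the 7 fixed fields; columns 8 onwards (negation
    annotations, or "***" when the sentence has none) are stored raw.
    """
    sentences, current = [], []
    for line in lines:
        line = line.rstrip("\n")
        if not line:                      # empty line: sentence boundary
            if current:
                sentences.append(current)
                current = []
            continue
        cols = line.split("\t")
        current.append({
            "file": cols[0], "sent": cols[1], "tok": cols[2],
            "word": cols[3], "lemma": cols[4],
            "pos": cols[5], "pos_type": cols[6],
            "negation": cols[7:],         # ["***"] or per-negation triples
        })
    if current:
        sentences.append(current)
    return sentences

# Invented two-token sentence in the described layout.
sample = [
    "hotel_1\t1\t1\tNo\tno\tRN\tneg\tNo\t-\t-",
    "hotel_1\t1\t2\tfui\tir\tVM\tmain\t-\t-\t-",
    "",
]
sents = read_conll(sample)
print(len(sents), sents[0][0]["negation"])
```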
        <sec id="sec-3-1-1">
          <title>5 http://sinai.ujaen.es/sfu-review-sp-neg-2/</title>
          <p>6 To download the data in the format provided for Sub-task A and Sub-task B, go to
http://www.sepln.org/workshops/neges2019/ or send an email to the organizers.</p>
          <p>
            The distribution of reviews and negation cues in the datasets is provided
in Table 1: 264 reviews with 2,511 negation cues for training the systems, 56
reviews with 594 negation cues for the tuning process, and 80 reviews with 836
negation cues for the final evaluation.
For Sub-task B, we provided the SFU ReviewSP-NEG corpus [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ] in its
original (XML) format. The meanings of the labels found in the reviews are the
following:
- &lt;review polarity=“positive/negative”&gt;. It describes the polarity of the
review, which can be “positive” or “negative”.
- &lt;sentence complex=“yes/no”&gt;. This label corresponds to a complete phrase
or a fragment thereof in which a negation structure can appear. Its
associated complex attribute can take one of the following values:
“yes”, if the sentence contains more than one negation structure;
“no”, if the sentence has only one negation structure.
- &lt;neg_structure&gt;. This label corresponds to a syntactic structure in which a
negation cue appears. It has 4 possible attributes, two of which (change and
polarity_modifier) are mutually exclusive.
          </p>
          <p>polarity: it presents the semantic orientation of the negation structure
(“positive”, “negative” or “neutral”).
change: it indicates whether the polarity or meaning of the negation
structure has been completely changed because of the negation (change=“yes”) or not (change=“no”).
polarity_modifier: it states whether the negation structure contains an
element that nuances its polarity. It can take the value “increment” if
there is an increment in the intensity of the polarity or, on the contrary,
the value “reduction” if there is a reduction of it.
value: it reflects the type of the negation structure, that is, “neg” if
it expresses negation, “contrast” if it indicates contrast or opposition
between terms, “comp” if it expresses a comparison or inequality between
terms, or “noneg” if it does not negate despite containing a negation cue.
- &lt;scope&gt;. This label delimits the part of the negation structure that is within
the scope of negation. It includes both the negation cue (&lt;negexp&gt;) and
the event (&lt;event&gt;).
- &lt;negexp&gt;. It contains the word(s) that constitute(s) the negation cue. It
can have the associated attribute discid if the negation is expressed by
discontinuous words.
- &lt;event&gt;. It contains the words that are directly affected by the negation
(usually verbs, nouns or adjectives).</p>
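These labels can be read with standard XML tooling. The sketch below uses Python's xml.etree.ElementTree on an invented fragment that mimics the markup described above (we spell the structure element neg_structure, since XML element names cannot contain spaces):

```python
import xml.etree.ElementTree as ET

# Invented fragment following the label inventory described above.
sample = """
<review polarity="negative">
  <sentence complex="no">
    <neg_structure polarity="negative" change="yes" value="neg">
      <scope>
        <negexp>no</negexp>
        <event>recomiendo</event>
      </scope>
    </neg_structure>
  </sentence>
</review>
"""

root = ET.fromstring(sample)
for struct in root.iter("neg_structure"):
    cue = struct.findtext("scope/negexp")      # negation cue word(s)
    event = struct.findtext("scope/event")     # affected word(s)
    print(struct.get("value"), cue, event)
```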
          <p>
            The distribution of reviews in the training, development and test sets is
provided in Table 2, as well as the distribution of the different negation structures
per dataset. The totals of positive and negative reviews can be seen in the rows
named + Reviews and - Reviews, respectively.
          </p>
          <p>
            The evaluation script used to evaluate the systems presented in Sub-task A was
the same as the one used to evaluate the *SEM 2012 Shared Task: “Resolving
the Scope and Focus of Negation” [
            <xref ref-type="bibr" rid="ref24">24</xref>
            ]. It is based on the following criteria:
- Punctuation tokens are ignored.
- For a True Positive (TP), all tokens of the negation element have to be
correctly identified.
- To evaluate cues, partial matches are not counted as False Positives (FP),
only as False Negatives (FN). This is to avoid penalizing partial matches
more than missed matches.
          </p>
          <p>The measures used to evaluate the systems were Precision (P), Recall (R) and
F-score (F1). In the proposed evaluation, FN are counted either when the system
does not identify negation cues present in the gold annotations, or when it
identifies them only partially, i.e., not all tokens have been correctly identified or the word
forms are incorrect. FP are counted when the system produces a negation cue not
present in the gold annotations, and TP are counted when the system produces
negation cues exactly as they are in the gold annotations.</p>
          <p>For evaluating Sub-task B, the traditional measures used in text
classification were applied: P, R, F1 and Accuracy (Acc). P, R and F1 were
measured per class and averaged using the macro-average method.</p>
          <p>P = TP / (TP + FP)</p>
          <p>R = TP / (TP + FN)</p>
          <p>About 13 teams showed interest and 5 teams submitted results.</p>
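Under the criteria above, cue-level scoring can be sketched as exact-set matching over token indices; the helper below is our illustration of those rules, not the official *SEM 2012 script:

```python
def cue_prf(gold, pred):
    """Precision/recall/F1 over cues, each cue a frozenset of token ids.

    Exact matches are TP; gold cues that are missed or only partially
    matched count as FN; only fully spurious predictions count as FP,
    so a partial overlap is not additionally punished as FP.
    """
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    fn = len(gold - pred)                  # missed or partial matches
    partial = {p for p in pred - gold if any(p & g for g in gold)}
    fp = len(pred - gold - partial)        # fully spurious predictions only
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = [frozenset({0, 2}), frozenset({11})]
pred = [frozenset({0}), frozenset({11})]   # one partial, one exact match
print(cue_prf(gold, pred))
```

Note how the partial prediction lowers recall (the gold cue counts as FN) without also lowering precision, matching the stated rule that partial matches must not be penalized more than missed matches.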
          <p>Sub-task A had 4 participants: Aspie96 from the University of Turin, the
CLiC team from Universitat de Barcelona, the IBI team from the Integrative
Biomedical Informatics group of Universitat Pompeu Fabra, and the UNED team from
Universidad Nacional de Educación a Distancia (UNED) and Instituto Mixto de
Investigación-Escuela Nacional de Sanidad (IMIENS). The official results by
domain are shown in Table 3, and overall results are presented in Table 4, both
evaluated in terms of P, R and F1. For the IBI and UNED teams the domain in
which it was most difficult to detect the negation cues was that of cell phone
reviews, while for Aspie96 and CLiC it was the domain of hotel and book
reviews, respectively. In terms of overall performance, the results of Aspie96 were
quite low compared to the other teams. The CLiC, IBI and UNED teams obtained
similar precision. However, the CLiC team achieved the highest recall, reaching
the first rank position.</p>
          <p>
            Aspie96 [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ] presented a model based on a convolutional Recurrent Neural
Network (RNN) previously used for irony detection in Italian tweets [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ] at
the IronITA shared task [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ]. In order to address the task at NEGES, the system
was modified to take tokens and Spanish spelling into account. Each word was
represented using a 50-character window in which non-word tokens were also
considered. The words were then fed into a GRU layer to expand the context.
The GRU layer's output was fed to a classifier that labeled each word as not
part of a negation cue, the first word of a negation cue, or part of the most
recently started negation cue. A similar model was shown to be suitable for the classification of
irony [
            <xref ref-type="bibr" rid="ref13">13</xref>
            ] and factuality [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ], but this does not seem to hold for negation: the results of the task
are quite low compared to those of other competing systems.
          </p>
          <p>
            The CLiC team [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ] developed a system based on the Conditional Random
Field (CRF) algorithm, inspired by the system of Loharja et al. (2018) [
            <xref ref-type="bibr" rid="ref22">22</xref>
            ]
presented in NEGES 2018 [
            <xref ref-type="bibr" rid="ref18">18</xref>
            ], which achieved the best results in that edition. They used as
features the word forms and PoS tags of the current word, the following word
and the previous 6 words. They also conducted experiments including two
post-processing methods: a set of rules and a vocabulary list composed of candidate
cues extracted from an annotated corpus (NewsCom). Neither the rules nor
the list of candidates boosted the basic CRF's results during the development phase.
Therefore, they submitted the CRF model without post-processing to the
competition, achieving the first position in the ranking.
          </p>
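The feature design described for the CLiC system (word form and PoS tag of the current token, the following token and the six preceding tokens) can be sketched as a window-based feature extractor; this is our reconstruction of the scheme, not the team's code, and the sample tags are illustrative:

```python
def token_features(words, tags, i):
    """Window features for token i: current token, the next token,
    and up to six preceding tokens (word form + PoS tag each)."""
    feats = {"word": words[i], "pos": tags[i]}
    if i + 1 < len(words):                 # one token of right context
        feats["word+1"] = words[i + 1]
        feats["pos+1"] = tags[i + 1]
    for k in range(1, 7):                  # six tokens of left context
        if i - k >= 0:
            feats[f"word-{k}"] = words[i - k]
            feats[f"pos-{k}"] = tags[i - k]
    return feats

words = ["no", "me", "ha", "gustado"]
tags = ["RN", "PP", "VA", "VM"]            # illustrative Freeling-style tags
print(sorted(token_features(words, tags, 1)))
```

Dictionaries of this shape are the usual input to CRF toolkits, where each token's feature dict becomes one position in the labeled sequence.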
          <p>
            The IBI team [
            <xref ref-type="bibr" rid="ref9">9</xref>
            ] experimented with four supervised learning approaches
(CRF, Random Forest, Support Vector Machine with linear kernel and
XGBoost), using shallow textual, lemma, PoS tag and dependency tree features
to characterize each token. For Random Forest, Support Vector Machine with
linear kernel and XGBoost they also used the same set of features for the three
previous and three posterior tokens in order to model the context of the
token in focus. The highest performance during the development phase was
achieved by the CRF approach. Therefore, they chose it to support their
participation, reaching the third rank position in the competition.
          </p>
          <p>
            The UNED team [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ] participated in the sub-task with a system based on
Deep Learning, which is an evolution of the system presented in the previous
edition of this workshop [
            <xref ref-type="bibr" rid="ref11">11</xref>
            ]. Specifically, they proposed a BiLSTM-based model
using word, PoS tag and character embedding features, and a one-hot vector
to represent casing information. Moreover, they included in the system a
post-processing phase in which some rules were used to correct frequent errors made
by the network. The results obtained represent an improvement over
those of the 2018 edition of NEGES [
            <xref ref-type="bibr" rid="ref18">18</xref>
            ] and place them in the second position
this year.
          </p>
          <p>
            Sub-task B had 1 participant: LTG-Oslo from the University of Oslo. The
official results per sentiment class (positive and negative) and overall results are
shown in Table 5. The results for the positive class are better than those of the
negative class and, overall, they do not show a strong performance in absolute
numbers, but the proposed approach is very interesting. LTG-Oslo [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ] addressed
the task using a multi-task learning approach where a single model is trained
simultaneously for negation detection and sentiment analysis. Specifically, shared
lower layers of a deep Bidirectional Long Short-Term Memory network
(BiLSTM) were used to predict negation, while the higher layers were dedicated to
predicting sentiment at the document level.
          </p>
          <p>
            This paper presents the description of the 2019 edition of the NEGES task, whose
aim is to continue advancing the state of the art of negation detection
in Spanish. Specifically, this edition consisted of 2 of the 3 sub-tasks carried out in
the previous edition: Sub-task A: “Negation cues detection” and Sub-task B:
“Role of negation in sentiment analysis”, both using the SFU ReviewSP-NEG
corpus [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ] to train and test the systems presented.
          </p>
          <p>Compared to the previous edition, this year the workshop has attracted more
attention, with more teams interested in participating in it (15 vs. 10). In
addition, despite including one less sub-task, the number of submissions has been
higher: in the 2018 edition of NEGES, a total of 4 teams participated in the
workshop, 2 for developing annotation guidelines and 2 for cue detection. The
task of studying the role of negation in sentiment analysis had no participants.
This year, 5 teams submitted results, 4 for identifying negation cues and 1 for
studying the role of negation in sentiment analysis. The low number of
submissions in the last sub-task may be due to the fact that, in order to study the
impact of accurate negation detection on sentiment analysis, it is necessary to
determine how to efficiently represent negation, in the case of machine learning
systems, or how to modify the polarity of the words within the scope of negation,
in the case of lexicon-based systems.</p>
          <p>Regarding the approaches followed to detect negation cues, it seems that the
teams continue to opt indistinctly for both more traditional machine learning
approaches and deep learning algorithms, confirming that the use of Conditional
Random Fields obtains the best results in this sub-task.</p>
          <p>Concerning the system errors and difficulties encountered in the identification
of negation cues, we can say the following. Aspie96 reported that the low results
of its system could be due to the fact that only the text of the documents had
been taken into account, without incorporating features such as the lemmas and
the PoS tags of the words, which could be of help. In fact, the other teams
used them and obtained good results. The CLiC team reported several types of
errors: identifying as negation cues elements that do not express negation (e.g. “Ya
estaba casi, no?” [It was almost there, wasn't it?]); not correctly identifying
continuous cues (e.g. “a no ser que” [unless], “a excepción de” [with the exception
of], “a falta de” [in the absence of]); tagging elements such as “tan” [so], “tanto”
[so much], “muy” [very] or “mucho” [much] in discontinuous cues; and not
detecting discontinuous cues. The IBI team detected that the performance of
the approaches tested decreases drastically when they deal with multi-token
negation cues. The UNED team also found it more difficult to identify
multi-token negation cues.</p>
          <p>As for the difficulties and errors in the evaluation of the role of negation in
sentiment analysis, LTG-Oslo stated that, given the fact that the task is
performed at the document level, it is difficult to determine them exactly. However,
it is concluded that the multi-task model (MTL) is better than the single-task
sentiment model (STL) for this sub-task, and that the training size and different
domains complicate the use of deep neural architectures.</p>
          <p>Future editions of the workshop will also focus on detecting negation in other
domains, such as the biomedical domain, and on studying other components of negation, such as
the scope. Moreover, authors will have to include an error analysis of the results
presented.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgements</title>
      <p>This work has been partially supported by a grant from the Ministerio de
Educación, Cultura y Deporte (MECD - scholarship FPU014/00983), Fondo Europeo
de Desarrollo Regional (FEDER), the REDES project (TIN2015-65136-C2-1-R)
and the LIVING-LANG project (RTI2018-094653-B-C21) from the Spanish
Government. RM is supported by the Netherlands Organization for Scientific Research
(NWO) via the Spinoza Prize awarded to Piek Vossen (SPI 30-673, 2014-2019).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <source>Proceedings of the Workshop on Computational Semantics beyond Events and Roles. Association for Computational Linguistics</source>
          , New Orleans, Louisiana (Jun
          <year>2018</year>
          ). https://doi.org/10.18653/v1/
          <fpage>W18</fpage>
          -13, https://www.aclweb.org/anthology/W18-1300
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Barnes</surname>
          </string-name>
          , J.:
          <article-title>LTG-Oslo Hierarchical Multi-task Network: The Importance of Negation for Document-level Sentiment in Spanish</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ).
          <source>CEUR Workshop Proceedings</source>
          , CEURWS, Bilbao, Spain (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Beltran</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gonzalez</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Detection of Negation Cues in Spanish: The CLiC-Neg System</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ).
          <source>CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Blanco</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morante</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saurí</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Proceedings of the workshop on extrapropositional aspects of meaning in computational linguistics (exprom)</article-title>
          .
          <source>In: Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics (ExProM)</source>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Blanco</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morante</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Saurí</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Proceedings of the workshop computational semantics beyond events and roles</article-title>
          .
          <source>In: Proceedings of the Workshop Computational Semantics Beyond Events and Roles</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Blanco</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morante</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sporleder</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Proceedings of the second workshop on extra-propositional aspects of meaning in computational semantics (exprom 2015)</article-title>
          .
          <source>In: Proceedings of the Second Workshop on Extra-Propositional Aspects of Meaning in Computational Semantics (ExProM</source>
          <year>2015</year>
          ) (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Buchholz</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marsi</surname>
          </string-name>
          , E.:
          <article-title>CoNLL-X shared task on multilingual dependency parsing</article-title>
          .
          <source>In: Proceedings of the tenth conference on computational natural language learning</source>
          . pp.
          <fpage>149</fpage>
          –
          <lpage>164</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Cignarella</surname>
            ,
            <given-names>A.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frenda</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Basile</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bosco</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Patti</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , et al.:
          <article-title>Overview of the EVALITA 2018 task on irony detection in Italian tweets (IronITA)</article-title>
          .
          <source>In: Proceedings of the 6th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA'18)</source>
          . pp.
          <fpage>26</fpage>
          –
          <lpage>34</lpage>
          (
          <year>2018</year>
          ), http://ceur-ws.org/Vol-2263/paper005.pdf
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Domínguez-Mas</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ronzano</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Furlong</surname>
            ,
            <given-names>L.I.</given-names>
          </string-name>
          :
          <article-title>Supervised Learning Approaches to Detect Negation Cues in Spanish Reviews</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ).
          <source>CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Fabregat</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Duque</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martínez-Romo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Araujo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Extending a Deep Learning approach for Negation Cues Detection in Spanish</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ).
          <source>CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Fabregat</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martínez-Romo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Araujo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Deep Learning Approach for Negation Cues Detection in Spanish at NEGES 2018</article-title>
          .
          <source>In: Proceedings of NEGES 2018: Workshop on Negation in Spanish, CEUR Workshop Proceedings</source>
          . vol.
          <volume>2174</volume>
          , pp.
          <fpage>43</fpage>
          –
          <lpage>48</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Farkas</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vincze</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mora</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Csirik</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Szarvas</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>The CoNLL-2010 shared task: learning to detect hedges and their scope in natural language text</article-title>
          .
          <source>In: Proceedings of the Fourteenth Conference on Computational Natural Language Learning – Shared Task</source>
          . pp.
          <fpage>1</fpage>
          –
          <lpage>12</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Giudice</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Aspie96 at IronITA (EVALITA 2018): Irony Detection in Italian Tweets with Character-Level Convolutional RNN</article-title>
          .
          <source>In: Proceedings of the 6th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA'18)</source>
          . pp.
          <fpage>160</fpage>
          –
          <lpage>165</lpage>
          (
          <year>2018</year>
          ), http://ceur-ws.org/Vol-2263/paper026.pdf
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Giudice</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Aspie96 at FACT (IberLEF 2019): Factuality Classification in Spanish Texts with Character-Level Convolutional RNN and Tokenization</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ).
          <source>CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Giudice</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Aspie96 at NEGES (IberLEF 2019): Negation Cues Detection in Spanish with Character-Level Convolutional RNN and Tokenization</article-title>
          .
          <source>In: Proceedings of the Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2019</year>
          ).
          <source>CEUR Workshop Proceedings</source>
          , CEUR-WS, Bilbao, Spain (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Horn</surname>
            ,
            <given-names>L.R.</given-names>
          </string-name>
          :
          <article-title>A natural history of negation</article-title>
          .
          <source>CSLI Publications</source>
          (
          <year>1989</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Horn</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>The expression of negation</article-title>
          . De Gruyter Mouton, Berlin (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Jiménez-Zafra</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cruz Díaz</surname>
            ,
            <given-names>N.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morante</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martín-Valdivia</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          :
          <source>NEGES 2018: Workshop on Negation in Spanish. Procesamiento del Lenguaje Natural (62)</source>
          ,
          <fpage>21</fpage>
          –
          <lpage>28</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Jiménez-Zafra</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taulé</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martín-Valdivia</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ureña-López</surname>
            ,
            <given-names>L.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martí</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          :
          <article-title>SFU Review SP-NEG: a Spanish corpus annotated with negation for sentiment analysis. A typology of negation patterns</article-title>
          .
          <source>Language Resources and Evaluation</source>
          <volume>52</volume>
          (
          <issue>2</issue>
          ),
          <fpage>533</fpage>
          –
          <lpage>569</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ohta</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pyysalo</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kano</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tsujii</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Overview of BioNLP'09 shared task on event extraction</article-title>
          .
          <source>In: Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing: Shared Task</source>
          . pp.
          <fpage>1</fpage>
          –
          <lpage>9</lpage>
          . Association for Computational Linguistics (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Konstantinova</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>De Sousa</surname>
            ,
            <given-names>S.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Díaz</surname>
            ,
            <given-names>N.P.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lopez</surname>
            ,
            <given-names>M.J.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taboada</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mitkov</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>A review corpus annotated for negation, speculation and their scope</article-title>
          .
          <source>In: Proceedings of LREC 2012</source>
          . pp.
          <fpage>3190</fpage>
          –
          <lpage>3195</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Loharja</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Padró</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turmo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Negation Cues Detection Using CRF on Spanish Product Review Text at NEGES 2018</article-title>
          .
          <source>In: Proceedings of NEGES 2018: Workshop on Negation in Spanish, CEUR Workshop Proceedings</source>
          . vol.
          <volume>2174</volume>
          , pp.
          <fpage>49</fpage>
          –
          <lpage>54</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Martí</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taulé</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nofre</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Marsó</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martín-Valdivia</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jiménez-Zafra</surname>
            ,
            <given-names>S.M.</given-names>
          </string-name>
          :
          <article-title>La negación en español: análisis y tipología de patrones de negación</article-title>
          .
          <source>Procesamiento del Lenguaje Natural (57)</source>
          ,
          <fpage>41</fpage>
          –
          <lpage>48</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Morante</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blanco</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>*SEM 2012 shared task: Resolving the scope and focus of negation</article-title>
          .
          <source>In: Proceedings of the First Joint Conference on Lexical and Computational Semantics</source>
          . pp.
          <fpage>265</fpage>
          –
          <lpage>274</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Morante</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sporleder</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <source>In: Proceedings of the Workshop on Negation and Speculation in Natural Language Processing</source>
          . pp.
          <fpage>1</fpage>
          –
          <lpage>109</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Morante</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sporleder</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <source>In: Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics</source>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Mowery</surname>
            ,
            <given-names>D.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Velupillai</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>South</surname>
            ,
            <given-names>B.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Christensen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martinez</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kelly</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goeuriot</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Elhadad</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pradhan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Savova</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , et al.:
          <article-title>Task 2: ShARe/CLEF eHealth Evaluation Lab 2014</article-title>
          .
          <source>In: Proceedings of CLEF</source>
          <year>2014</year>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Padró</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stanilovsky</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Freeling 3.0: Towards wider multilinguality</article-title>
          .
          <source>In: Proceedings of LREC 2012</source>
          . Istanbul, Turkey (May
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Taboada</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anthony</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Voll</surname>
            ,
            <given-names>K.D.</given-names>
          </string-name>
          :
          <article-title>Methods for creating semantic orientation dictionaries</article-title>
          .
          <source>In: Proceedings of LREC 2006</source>
          . pp.
          <fpage>427</fpage>
          –
          <lpage>432</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Uzuner</surname>
            ,
            <given-names>Ö.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>South</surname>
            ,
            <given-names>B.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shen</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>DuVall</surname>
            ,
            <given-names>S.L.</given-names>
          </string-name>
          :
          <article-title>2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text</article-title>
          .
          <source>Journal of the American Medical Informatics Association</source>
          <volume>18</volume>
          (
          <issue>5</issue>
          ),
          <fpage>552</fpage>
          –
          <lpage>556</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>