<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>DLSI-Volvam at RepLab 2013: Polarity Classification on Twitter Data</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alejandro Mosquera</string-name>
          <email>amosquera@dlsi.ua.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Javi Fernández</string-name>
          <email>javifm@dlsi.ua.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>José M. Gómez</string-name>
          <email>jmgomez@dlsi.ua.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Patricio Martínez-Barco</string-name>
          <email>patricio@dlsi.ua.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paloma Moreda</string-name>
          <email>paloma@dlsi.ua.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Software and Computing Systems, University of Alicante</institution>
          ,
          <addr-line>Alicante</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Volvam Analytics Ltd.</institution>
          ,
          <addr-line>Dublin</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2013</year>
      </pub-date>
      <abstract>
        <p>This paper describes our participation in the profiling (polarity classification) task of the RepLab 2013 workshop. This task focuses on determining whether a given text from Twitter contains a positive or a negative statement related to the reputation of a given entity. We cover three different approaches, one unsupervised and two supervised. They combine machine learning and lexicon-based techniques with an emotional concept model. These approaches were adapted to English and Spanish depending on the resources available for each language. We obtained promising results in the overall evaluations, reaching an F-score of 34% and a sensitivity of 40% in the best cases. The reasonable level of performance compared to other methods encourages us to continue working on the improvement of the proposed approaches.</p>
      </abstract>
      <kwd-group>
        <kwd>online reputation</kwd>
        <kwd>sentiment analysis</kwd>
        <kwd>polarity classification</kwd>
        <kwd>text normalisation</kwd>
        <kwd>machine learning</kwd>
        <kwd>lexicon</kwd>
        <kwd>emotion concepts</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Nowadays, social media applications have allowed users to have an active
participation through their comments and opinions, stated about a wide range of
topics and services. This subjective information is very valuable because it
determines the reputation of public figures and companies in the marketplace of
personal and business relationships. However, it is not feasible to monitor this
information in a manual way, because the amount of information is very large and
is updated very quickly. Therefore, automatising this process is essential. The
field of on-line reputation management (ORM) studies automated ways to track
the opinion of the users about qualitative or quantitative aspects dealing with
several challenges such as subjectivity, textual noise or domain heterogeneity.
This task is very complex, as it deals with important issues in opinion mining,
sentiment analysis, bias detection, named entity discrimination, topic modelling
and other aspects which are not trivial in natural language processing [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        RepLab 2013 is a competitive evaluation exercise for ORM systems, focusing
on monitoring the reputation of entities (companies, organisations, celebrities,
etc.) on Twitter [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In this article we focus on the profiling
(polarity classification) task. The goal of this task is to decide whether the tweet
content has positive or negative implications for the reputation of a given entity.
Polarity for reputation is substantially different from standard sentiment
analysis, because the goal is to find what implications a piece of information has, regardless
of whether the content is opinionated or not. In addition, negative sentiments do
not always imply negative polarity for reputation and vice versa (e.g. "R.I.P.
Whitney Houston. We'll miss you" has a negative associated sentiment but
a positive implication for the reputation of Whitney Houston).
      </p>
      <p>We propose three different approaches to this task. The first approach is
unsupervised and makes use of fuzzy lexicons in order to capture the informal variants
that are common in Twitter texts. The second one is supervised and extends the
first approach with machine learning (ML) techniques and an emotion concept
model. Finally, the last one also employs ML, but this time following the
bag-of-concepts (BoC) approach with common-sense affective knowledge. Each approach
has been adapted to English and Spanish, depending on the resources
available for each language.</p>
      <p>The remainder of the paper is structured as follows. In Section 2 we describe
the proposed approaches, as well as the tools and resources used in their
implementation. The experiments performed, their evaluation and a discussion are
provided in Section 3. Finally, Section 4 concludes the paper and outlines
future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Polarity Classification</title>
      <p>The following sections explain the three different approaches we submitted to the
polarity classification subtask of RepLab 2013. We focus on the techniques, tools
and resources employed for the design and implementation of each approach.
Their main goal is to determine whether a tweet has a positive, negative or
neutral impact on the reputation of a given entity. We adapted each approach
to English and Spanish but, as not all the required
resources are available for both languages, the adaptations are not symmetric.</p>
      <p>The preprocessing module, common to all our approaches, is explained in
Section 2.1. The first approach is unsupervised and is described in Section 2.2. In
Sections 2.3 and 2.4 we explain the supervised approaches.</p>
      <p>Preprocessing
Tweets are preprocessed before applying any model by following these common
steps, for both English and Spanish:
1) Cleansing. All the words with non-standard characters are removed.</p>
      <sec id="sec-2-1">
        <title>3 http://www.twitter.com</title>
        <p>
          2) Tokenisation. The text is first split into sentences using regular expressions.
3) Lemmatisation. For each sentence we extract the lemmas of its words. In
English texts this lemma extraction is made using the MBLEM4
lemmatiser, which combines a memory-based ML algorithm with a dictionary lookup.
Freeling5 [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] was the tool selected for extracting lemmas from sentences in
Spanish. In order to obtain accurate lemmas, a custom dictionary was created
to replace common out-of-vocabulary (OOV) words, such as misspellings and
informal lexical variants, with their canonical version (e.g. lol → laugh; q →
que).
4) URL removal. Each URL is substituted with a place-holder tag (URL).
5) Twitter hashtag splitting. Hashtags can contain sentiment-related information,
so we split them into independent words using a cost function based on word
frequencies (e.g. #WeHateVF → we hate VF).
6) Emoticon normalisation. We follow the same approach found in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] in order to
replace emoticons with their textual equivalence (e.g. xDDD → I am happy).
7) Named-entity detection. Locations, people and temporal expressions are
detected using a maximum entropy tagger, which was trained with the CoNLL
dataset [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
        </p>
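        <p>Step 5 above (hashtag splitting) can be illustrated with a small dynamic-programming word segmenter. This is a minimal sketch: the frequency table is a hypothetical stand-in for the corpus-derived word frequencies behind the cost function described above.</p>
        <preformat>
```python
import math

# Hypothetical unigram frequencies (stand-ins for corpus-derived counts).
FREQ = {"we": 500, "hate": 120, "vf": 5, "love": 300, "it": 800}
TOTAL = sum(FREQ.values())

def word_cost(word):
    """Negative log-probability; unknown words get a very high cost."""
    count = FREQ.get(word, 0.01)
    return -math.log(count / TOTAL)

def split_hashtag(tag):
    """Split e.g. '#WeHateVF' into words by minimising the total word cost."""
    text = tag.lstrip("#").lower()
    n = len(text)
    best = [(0.0, [])]  # best[i] holds (cost, words) for the prefix text[:i]
    for i in range(1, n + 1):
        candidates = [
            (best[j][0] + word_cost(text[j:i]), best[j][1] + [text[j:i]])
            for j in range(i)
        ]
        best.append(min(candidates))
    return best[n][1]

print(split_hashtag("#WeHateVF"))  # ['we', 'hate', 'vf']
```
        </preformat>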
        <p>
          Volvam Polarity 1: Unsupervised Lexicon-Based Model
Our first submitted run makes use of the fuzzy lexicons of SentiStrength6 [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], in
order to detect the most common informal terminology used in Twitter. These
lexicons indicate not only whether a term represents a positive or a negative opinion,
but also an intensity score. The terms in these lexicons are English terms, so
we manually translated them to obtain the corresponding Spanish lexicons. In
addition, we extended the lexicons to allow the detection of modifiers that can
invert (negation), increase or decrease the polarity score of each term. The
polarity score of a text T is calculated by adding the lexicon scores of each term t
inside that text, weighted by the scores of its modifiers:

polarityScore(T) = Σ_{t∈T} lexiconScore(t) · modifiersScore(t)   (1)

where lexiconScore(t) is the polarity score for the term t (range [-4, 4]) and
modifiersScore(t) is the score given to the term t by the modifiers of term t
(range [-1, 1]). Finally, the polarity of that text is assigned depending on the
polarity score obtained, using the following formula:
        </p>
        <p>polarity(T) = positive if polarityScore(T) &gt; 0; neutral if polarityScore(T) = 0; negative if polarityScore(T) &lt; 0</p>
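        <p>A minimal Python sketch of this scheme follows. The lexicon entries and modifier weights are hypothetical illustrations (not SentiStrength's actual values), and treating a modifier as a multiplicative weight on the following term is our reading of Equation 1.</p>
        <preformat>
```python
# Hypothetical lexicon: term -> polarity score in the range [-4, 4].
LEXICON = {"good": 2, "great": 3, "bad": -2, "awful": -3}
# Hypothetical modifier weights in the range [-1, 1]:
# negation inverts, "slightly" dampens.
MODIFIERS = {"not": -1.0, "slightly": 0.5}

def polarity_score(tokens):
    """Sum lexicon scores, each weighted by an immediately preceding modifier."""
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            weight = MODIFIERS.get(tokens[i - 1], 1.0) if i else 1.0
            score += LEXICON[tok] * weight
    return score

def polarity(tokens):
    """Map the numeric score to the three polarity classes."""
    s = polarity_score(tokens)
    if s > 0:
        return "positive"
    if s == 0:
        return "neutral"
    return "negative"

print(polarity("the service was not good".split()))  # negative
```
        </preformat>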
      </sec>
      <sec id="sec-2-2">
        <title>4 http://ilk.uvt.nl/mbma/ 5 http://nlp.lsi.upc.edu/freeling/ 6 http://sentistrength.wlv.ac.uk</title>
        <p>Volvam Polarity 2: Supervised Model Combining Lexicons and Concepts</p>
        <p>
          Our second submitted run uses a supervised ML model. The features used for
this model are generated using the unsupervised model from Section 2.2:
- TotalPolarity. Total polarity obtained from the unsupervised model.
- AvgSubjectivity. Average subjectivity values extracted from the v2.0 polarity
dataset [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
- CountPositive. Number of positive words in the text.
- CountNegative. Number of negative words in the text.
- CountNeutral. Number of neutral words in the text.
- SentenceTokens. Tokens-per-sentence ratio.
- TotalSubjectivity. Total subjectivity value.
- CountSubjective. Number of words with subjectivity &gt; 0.
- CountProfanity. Number of profanity words.
- CountQuestions. Number of sentences that are questions.
- CountNonQuestions. Number of sentences that are not questions.
- CountNegated. Number of negated sentences.
- CountModPlus. Number of augmentative modifiers.
- CountModMinus. Number of diminutive modifiers.
        </p>
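        <p>As an illustration, a few of these features can be computed with a naive vector-builder. The word lists below are hypothetical stand-ins for the lexical resources listed above, and only a subset of the features is shown.</p>
        <preformat>
```python
# Hypothetical word lists standing in for the paper's lexical resources.
POSITIVE = {"good", "great", "happy"}
NEGATIVE = {"bad", "awful", "sad"}
PROFANITY = {"damn"}

def features(text):
    """Build a partial feature vector like the one described above."""
    sentences = [s for s in text.split(".") if s.strip()]
    tokens = text.lower().replace(".", " ").split()
    n_pos = sum(1 for t in tokens if t in POSITIVE)
    n_neg = sum(1 for t in tokens if t in NEGATIVE)
    return {
        "TotalPolarity": n_pos - n_neg,
        "CountPositive": n_pos,
        "CountNegative": n_neg,
        "CountNeutral": len(tokens) - n_pos - n_neg,
        "SentenceTokens": len(tokens) / max(len(sentences), 1),
        "CountProfanity": sum(1 for t in tokens if t in PROFANITY),
        "CountQuestions": text.count("?"),  # naive question detector
    }

print(features("Great phone. The battery is awful."))
```
        </preformat>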
        <p>
          In addition, for the English texts, we added emotion-based features from
SenticNet [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. SenticNet consists of a lexicon containing four concept dimensions
for each term: pleasantness, attention, sensitivity and aptitude. These concepts
and their scores are used as additional features to build the ML model.
        </p>
        <p>
          As the training dataset provided for this task was highly unbalanced, in
terms of language and polarity labels, we followed a cross-corpus approach. As
training set for the English language we used the sentiment analysis training
dataset from SemEval 2013 [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] and, for the Spanish language, the TASS 2012
[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] training set. The classification model was built using the Random Forests
[
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] ensemble classifier on a subset of 6,000 tweets.
        </p>
        <p>Volvam Polarity 3: Supervised Model Using Bag-of-Concepts
In our last submission we created different models for each language. For
English, a Random Forest classifier was built using concept count vectors extracted
from the provided RepLab training data. We followed the BoC approach using
SenticNet common-sense affective knowledge. As we did not find an equivalent
emotion-based model for Spanish, we followed a simpler bag-of-words approach
using the lemmas of the terms in the text.</p>
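        <p>The BoC representation can be sketched as a count vector over a fixed concept vocabulary. The word-to-concept map below is a hypothetical stand-in for SenticNet lookups.</p>
        <preformat>
```python
# Hypothetical word->concept map standing in for SenticNet lookups.
CONCEPTS = {"crash": "accident", "wreck": "accident",
            "gift": "present", "prize": "present"}
VOCAB = ["accident", "present"]  # fixed concept vocabulary

def boc_vector(tokens):
    """Count, for each concept in VOCAB, how many tokens map to it."""
    counts = {c: 0 for c in VOCAB}
    for tok in tokens:
        concept = CONCEPTS.get(tok.lower())
        if concept in counts:
            counts[concept] += 1
    return [counts[c] for c in VOCAB]

print(boc_vector("The crash was a wreck".split()))  # [2, 0]
```
        </preformat>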
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Evaluation</title>
      <p>
        Our system was evaluated in terms of accuracy and F(R, S) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], where R
(reliability) is the precision of the relations predicted by the system with respect to
the actual relations in the gold standard, and S (sensitivity) is the recall of the relations
predicted by the system with respect to the actual relations in the gold standard.
A comparison of the results obtained is given in Table 1 (only the best run
of each of the other teams is displayed, for informative purposes).
      </p>
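      <p>Assuming F(R, S) is the usual harmonic-mean combination of reliability and sensitivity (our reading of the measure in [12]), it can be computed as:</p>
      <preformat>
```python
def f_measure(reliability, sensitivity):
    """Harmonic mean of reliability (R) and sensitivity (S)."""
    if reliability + sensitivity == 0:
        return 0.0
    return 2 * reliability * sensitivity / (reliability + sensitivity)

print(round(f_measure(0.30, 0.40), 3))  # 0.343
```
      </preformat>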
      <p>
        In general, the results obtained by all participants are not as high as the
state-of-the-art results in polarity classification. This is because polarity
for reputation is a more complex task [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In addition, the datasets provided are
highly unbalanced, so the accuracies are not significant [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ]. This fact can be
seen in the results of the trivial ALLPOSITIVE run, in which all texts in the training
set were classified as positive, which achieves an accuracy of 57%.
      </p>
      <p>Our best-ranked approach is the second one, with an F-score of 34%, very close
to the 38% obtained by the best approach among all participants. Our first approach
reached the best sensitivity of all runs, at 40%.</p>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>In this paper we described our participation in the profiling (polarity
classification) task of the RepLab 2013 workshop. We covered three different
approaches, one unsupervised and two supervised, combining machine
learning and lexicon-based techniques with an emotional concept model. These
approaches were adapted to English and Spanish depending on the
resources available for each language. We obtained promising results in the overall
evaluations, reaching an F-score of 34% and a sensitivity of 40% in the best cases.
The reasonable level of performance compared to other methods encourages us
to continue working on the improvement of the proposed approaches.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Balahur</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>The challenge of processing opinions in online contents in the social web era</article-title>
          .
          <source>In: Proceedings of the Language Engineering for Online Reputation Management Workshop, LREC 2012</source>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Amigó</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , Carrillo de Albornoz, J.,
          <string-name>
            <surname>Chugur</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corujo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gonzalo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martín</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meij</surname>
            , E., de Rijke,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spina</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          : Overview of RepLab 2013:
          <article-title>Evaluating online reputation monitoring systems</article-title>
          .
          <source>In: Fourth International Conference of the CLEF initiative, CLEF</source>
          <year>2013</year>
          ,
          <article-title>Valencia, Spain</article-title>
          . Proceedings. Springer LNCS (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Padró</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stanilovsky</surname>
          </string-name>
          , E.:
          <article-title>Freeling 3.0: Towards wider multilinguality</article-title>
          . In Chair),
          <string-name>
            <given-names>N.C.C.</given-names>
            ,
            <surname>Choukri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Declerck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Dogan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.U.</given-names>
            ,
            <surname>Maegaard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            ,
            <surname>Mariani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Odijk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Piperidis</surname>
          </string-name>
          , S., eds.
          <source>: Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)</source>
          , Istanbul, Turkey, European Language Resources Association (ELRA) (may
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Mosquera</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lloret</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moreda</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Towards facilitating the accessibility of web 2.0 texts through text normalisation</article-title>
          .
          <source>In: Proceedings of the LREC workshop: Natural Language Processing for Improving Textual Accessibility (NLP4ITA)</source>
          ; Istanbul, Turkey. (
          <year>2012</year>
          )
          <volume>9</volume>
          {
          <fpage>14</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>Tjong</given-names>
            <surname>Kim Sang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.F.</given-names>
            ,
            <surname>De Meulder</surname>
          </string-name>
          ,
          <string-name>
            <surname>F.</surname>
          </string-name>
          :
          <article-title>Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition</article-title>
          .
          <source>In: Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume</source>
          <volume>4</volume>
          , Association for Computational Linguistics (
          <year>2003</year>
          )
          <volume>142</volume>
          {
          <fpage>147</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Thelwall</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buckley</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paltoglou</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cai</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kappas</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Sentiment strength detection in short informal text</article-title>
          .
          <source>Journal of the American Society for Information Science and Technology</source>
          <volume>61</volume>
          (
          <issue>12</issue>
          ) (
          <year>2010</year>
          )
          <volume>2544</volume>
          {
          <fpage>2558</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Pang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts</article-title>
          .
          <source>In: Proceedings of the ACL</source>
          . (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Cambria</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Havasi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hussain</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>SenticNet 2: A semantic and affective resource for opinion mining and sentiment analysis</article-title>
          . In Youngblood,
          <string-name>
            <given-names>G.M.</given-names>
            ,
            <surname>McCarthy</surname>
          </string-name>
          , P.M., eds.: FLAIRS Conference, AAAI Press (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9. Wilson,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Kozareva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            ,
            <surname>Nakov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Rosenthal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Stoyanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            ,
            <surname>Ritter</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.</surname>
          </string-name>
          :
          <article-title>Semeval-2013 task 2: Sentiment analysis in twitter</article-title>
          .
          <source>In: Proceedings of the International Workshop on Semantic Evaluation, SemEval</source>
          . Volume
          <volume>13</volume>
          . (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Villena-Román</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lana-Serrano</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martínez-Cámara</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cristobal</surname>
            ,
            <given-names>J.C.G.</given-names>
          </string-name>
          : TASS - workshop
          <source>on sentiment analysis at SEPLN. Procesamiento del Lenguaje Natural</source>
          <volume>50</volume>
          (
          <year>2013</year>
          )
          <volume>37</volume>
          {
          <fpage>44</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Breiman</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Random forests</article-title>
          .
          <source>Mach. Learn</source>
          .
          <volume>45</volume>
          (
          <issue>1</issue>
          ) (
          <year>October 2001</year>
          )
          <volume>5</volume>
          {
          <fpage>32</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Amigó</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gonzalo</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Verdejo</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Reliability and sensitivity: Generic evaluation measures for document organization tasks</article-title>
          .
          <source>In: Tech. rep., UNED</source>
          . (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>X.:</given-names>
          </string-name>
          <article-title>A re-examination of text categorization methods</article-title>
          .
          <source>In: Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval</source>
          ,
          <source>ACM</source>
          (
          <year>1999</year>
          )
          <volume>42</volume>
          {
          <fpage>49</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Boldrini</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , Fernández Martínez, J.,
          <string-name>
            <surname>Gomez</surname>
            <given-names>Soriano</given-names>
          </string-name>
          ,
          <string-name>
            <surname>J.M.</surname>
          </string-name>
          , Martínez Barco,
          <string-name>
            <surname>P.</surname>
          </string-name>
          , et al.:
          <article-title>Machine learning techniques for automatic opinion detection in nontraditional textual genres</article-title>
          . (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>