<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>UMUTeam at PoliticIT-EVALITA2023: Evaluating a Transformer Model for Detecting Political Ideology in Italian Texts</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ronghao Pan</string-name>
          <email>ronghao.pan@um.es</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ángela Almela</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francisco García-Sánchez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Facultad de Informática, Universidad de Murcia, Campus de Espinardo</institution>
          ,
          <addr-line>30100</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Facultad de Letras, Universidad de Murcia, Campus de La Merced</institution>
          ,
          <addr-line>30001, Murcia</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper describes the participation of the UMUTeam in the PoliticIT shared task organized at EVALITA 2023. It is an automatic document classification task on clusters of texts, which consists of extracting self-assigned gender as a demographic trait and ideology as a psychographic trait from a set of texts written in Italian by several authors sharing these traits. For this task, we used the fine-tuning approach of a pre-trained transformer-based masked language model for Italian called dbmdz/bert-base-italian-cased to carry out the identification of the different features. After several submissions for these tasks, our team ranked sixth out of 7 participants, with an average F1 score of 70.426% across all classification models. However, our binary political ideology classification model obtained the fourth-best result, with an F1 score of 86.63%.</p>
      </abstract>
      <kwd-group>
        <kwd>Natural Language Processing</kwd>
        <kwd>Transformers</kwd>
        <kwd>Political ideology detection</kwd>
        <kwd>Large Language Model</kwd>
        <kwd>Multiclass classification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Political ideology is considered to be a psychographic
trait that can be used to understand individual and
social behavior, including moral and ethical values, as well
as inherent attitudes, appraisals, biases, and prejudices
litical ideology was demonstrated in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], in which the
authors collected data from 21 countries and concluded
that political ideology is related to the big five
personality traits and that the results of this relationship vary
across countries, especially as a function of the level of
prosperity.
      </p>
      <p>
        Political ideology has a great influence on society and
graphic traits such as personality. However, these
decisions are made both consciously and unconsciously,
as the social and cultural environment influences our
ideology. In general, however, people tend to be more
reluctant to follow advice and directions from politicians
who do not coincide with their ideology [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In some
extreme cases, people may be strongly biased in favor of
one political party and, at the same time, strongly
disagree with others, which may lead to irrational
decisions and put people's lives at risk by ignoring
specific recommendations of the authorities.
      </p>
      <p>
        Thus, the aim of PoliticIT at EVALITA 2023 [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] is to
extract information about users' political ideology through
text clusters in Italian. For this purpose, the task is
divided into several objectives, such as extracting
self-assigned gender as a demographic trait, and ideology
as a psychographic trait through a set of texts written in
Italian by several authors sharing these traits. This task
builds on a previous task called PoliticES presented at
IberLEF 2022 [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], where the dataset was an extension of the
PoliCorpus 2020 dataset [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <sec id="sec-1-1">
        <p>
          Transformer models are a neural network architecture
used in various fields of NLP and other machine learning
domains. The idea of a Transformer network originated
from a paper called Attention is all you need [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], developed by researchers at Google to address the
language translation problem. Unlike recurrent neural
networks (RNNs) and convolutional neural networks (CNNs),
which were widely used in NLP before the advent of
Transformers, this model relies on attention mechanisms to
process sequences more efficiently and capture long-range
relationships between words in a sentence or the order of
words in a sequence [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Pre-trained models based on Transformers are
language models generated through massive training on
large amounts of unlabeled text, aiming to learn language
patterns and features from the training data. This enables
them to capture semantic, syntactic, and contextual
information, developing a deep understanding of language
in general. For example, BERT (Bidirectional Encoder
Representations from Transformers) is a Transformer-based
model that achieved remarkable accuracy and advanced the
state of the art in various Natural Language Processing
(NLP) tasks [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. Another advantage of pre-trained models is the
ability to add specific layers to adapt them to particular
tasks. In other words, the process of taking a model
pre-trained on vast amounts of data and continuing the
training on a smaller and task-specific dataset is known
as fine-tuning. The advantage of fine-tuning is that it
allows leveraging the general language knowledge acquired
during pre-training and applying it to specific tasks with
relatively small amounts of training data.
        </p>
        <p>
          This paper presents the UMUTeam's participation in
this shared task, based on the fine-tuning of a
pre-trained Italian Transformer model to detect the gender
and political ideology of users through their texts
written in Italian (document-level classification). The
rest of the paper is organized as follows. Section 2
presents the task and the dataset provided. Section 3
describes the methodology of our proposed system for
addressing the task. Section 4 shows the results obtained.
Finally, Section 5 concludes the paper with some findings
and possible future work.
        </p>
      </sec>
      <sec id="sec-1-2">
        <title>2. Task description</title>
        <p>
          The shared task PoliticIT 2023, organized at EVALITA [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], is a document classification problem that aims to
extract self-assigned gender as a demographic trait, and
political ideology as a psychographic trait, from a given
text cluster [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. The organizers propose political ideology as both a
binary classification problem and a multi-class
classification problem. The organizers created text
clusters by mixing some of the extracted tweets to avoid
ethical and privacy concerns about author profiling on
Twitter. Consequently, all clusters are composed of texts
written by different users sharing all the features under
evaluation. The users of the dataset are labeled with
their gender (male, female) and political spectrum on two
axes: binary (left, right) and multi-class (left, moderate
left, moderate right, right). Regarding the tweets
collected from each user, the organizers discarded
retweets and tweets that contain headlines from news
sites, and removed tweets written in languages other than
Italian. Moreover, they anonymized the tweets by replacing
all mentions of politicians and other Twitter accounts
with @user. Furthermore, other entities, such as political
party references, are also replaced with the
@political_party token. The tweets that belong to each
cluster are selected favoring diversity, including texts
from different dates and topics. For this shared task, the
dataset is divided into training and test sets (80%-20%).
Thus, the training set consists of 103,840 tweets from
1,298 clusters and the test set consists of 453 users with
a total of 36,240 tweets. The distribution of demographic
and psychographic traits of the clusters is shown in
Table 1.
        </p>
        <sec id="sec-1-2-1">
          <title>3. Methodology</title>
          <p>
            This task involves identifying the gender
(demographic trait) and political ideology (psychographic
trait) of users in a given set of texts (document-level
classification). The task addresses gender as a binary
classification problem and political ideology as both a
binary and a multi-class classification problem. For these
different classification problems, we have used the same
approach, which consists in fine-tuning a pre-trained
Italian Transformer model for the different demographic
and psychographic trait classification tasks at the tweet
level. The approach used to solve the challenge of
classifying both demographic and psychographic features at
the document level consists in creating a phrase-level
classification model by fine-tuning a pre-trained Italian
model based on BERT [
            <xref ref-type="bibr" rid="ref8">8</xref>
            ] called dbmdz/bert-base-italian-cased [
            <xref ref-type="bibr" rid="ref10">10</xref>
            ]. The system architecture is depicted in Figure 1,
and the pipeline used to participate in this task can be
described as follows. First, the dataset has been
processed, and the emoji in the text have been transformed
into text through the emoji library. Second, the training
set has been divided into two parts: training and
validation. Third, classification models are created for
each feature using the approach explained above. Fourth,
having the classification models at the sentence level,
two strategies have been evaluated to identify the
demographic and psychographic traits of the users (at
document level): (1) mode, which consists in predicting
each user's tweets individually and selecting the most
repeated label among the results obtained with the
classifier, and (2) highest probability, which selects the
label with the highest probability. Next, some more
details about the preprocessing and fine-tuning stages are
provided.
          </p>
        </sec>
        <sec id="sec-1-2-2">
          <title>3.1. Dataset preprocessing</title>
          <p>
            All tweets from a given user share the same
demographic and psychographic features, so to carry out
the task of identifying these features, we created a
sentence-level classification model for each feature with
all tweets from all users as the training set.
          </p>
        </sec>
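        <p>
          The two document-level aggregation strategies described
above can be sketched in a few lines. This is a minimal,
hypothetical illustration rather than the task's actual
code: the function names and the example predictions are
our own, and each tweet is assumed to have already been
scored by the sentence-level classifier with a label and a
probability.
        </p>

```python
from collections import Counter

def aggregate_mode(predictions):
    """Mode strategy: the most repeated label among a user's
    tweet-level predictions wins."""
    labels = [label for label, _ in predictions]
    return Counter(labels).most_common(1)[0][0]

def aggregate_highest_probability(predictions):
    """Highest-probability strategy: the label of the single most
    confident tweet-level prediction wins."""
    best_label, _ = max(predictions, key=lambda pair: pair[1])
    return best_label

# Hypothetical tweet-level predictions (label, probability) for one user.
preds = [("left", 0.61), ("left", 0.72), ("right", 0.99)]
print(aggregate_mode(preds))                 # "left"
print(aggregate_highest_probability(preds))  # "right"
```

        <p>
          Note how a single over-confident tweet flips the
highest-probability decision; this sensitivity is
consistent with the lower score obtained by that strategy
in Section 4.
        </p>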
      </sec>
      <sec id="sec-1-3">
        <title>3.2. Fine-tuning approach</title>
        <p>
          We utilized the fine-tuning approach of a
transformer-based masked language model for Italian called
dbmdz/bert-base-italian-cased
(https://github.com/dbmdz/berts) to carry out the
identification of the different features. Based on the
BERT base model,
dbmdz/bert-base-italian-cased has been pre-trained
using a recent Wikipedia dump and various texts from the
OPUS corpora collection [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. The final training
corpus has a size of 13GB and 2,050,057,573 tokens. The
fine-tuning process consists of adapting and adding a
classification layer to the model to perform the training
of the full model. In this way, the model takes advantage
of the pre-trained linguistic knowledge of
dbmdz/bert-base-italian-cased and adapts it specifically
for a particular classification task, which can
significantly improve performance on that task. To do
this, we have used the Transformers library
(https://huggingface.co/docs/transformers/index).
        </p>
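        <p>
          The fine-tuning setup described above can be sketched
with the Hugging Face Transformers library. The following
is a hypothetical sketch rather than the exact training
script used for the task: the label mapping, the dataset
wrapper, and the hyperparameters are our own assumptions.
        </p>

```python
def build_trainer(train_texts, train_labels, val_texts, val_labels, id2label):
    """Sketch: fine-tune dbmdz/bert-base-italian-cased with a freshly
    added classification head (assumed hyperparameters)."""
    # Heavy imports live inside the function so the module can be
    # read and reused without transformers/torch installed.
    import torch
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "dbmdz/bert-base-italian-cased",
        num_labels=len(id2label), id2label=id2label)

    class TweetDataset(torch.utils.data.Dataset):
        """Wraps tokenized tweets and integer labels for the Trainer."""
        def __init__(self, texts, labels):
            self.enc = tokenizer(texts, truncation=True, padding=True)
            self.labels = labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, i):
            item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
            item["labels"] = torch.tensor(self.labels[i])
            return item

    args = TrainingArguments(output_dir="out", num_train_epochs=3,
                             per_device_train_batch_size=16)
    return Trainer(model=model, args=args,
                   train_dataset=TweetDataset(train_texts, train_labels),
                   eval_dataset=TweetDataset(val_texts, val_labels))

# Example label mapping for the binary ideology trait (our assumption);
# calling build_trainer(...).train() would run the fine-tuning.
ID2LABEL_BINARY = {0: "left", 1: "right"}
```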
      </sec>
    </sec>
    <sec id="sec-2">
      <title>4. Results</title>
      <sec id="sec-2-1">
        <p>This section describes the systems presented by our
team in each run and the overall results obtained in this
shared task. It should be noted that each participating
team could submit ten runs.</p>
        <p>We submitted a total of 2 runs for this task. The results
and a brief description of each are shown in Table 3. The
first run is based on running the different classification
models on the user texts and then using the "mode"
strategy, which consists in selecting the most frequent
label obtained in the set of user texts for each feature.
This run obtained an average F1 score of 0.70426. The
second run has the same structure as the first one but
with a different post-processing strategy; in this case,
the "highest probability" strategy has been used, which
consists of making a decision based on the highest
probability among all the predictions obtained on the user
texts. In this case, it performed worse than with the
"mode" strategy, with an average F1 score of 0.49232.</p>
        <p>To perform the error analysis and check in which case
the models give wrong predictions, a normalized
confusion matrix with truth labels has been used, which
consists of a table showing the distribution of the
predictions of a model with respect to the truth label of the</p>
      </sec>
      <sec id="sec-2-2">
        <p>data. The confusion matrix of the system using the mode
strategy is shown in Figure 2. It can be observed that our
model tends to confuse the female gender with the male
gender and right-wing with left-wing political ideology,
and this may be due to the fact that there are fewer
examples in the training set (see Table 2), which makes
the model predict the wrong class. The multi-class
political ideology classification model fails mainly in
the left-wing prediction, which tends to predict moderate
left with a probability of 58%, and the moderate
right-wing prediction, which tends to predict moderate
left with a probability of 74.51%.</p>
        <p>Figure 3 shows the confusion matrices of the system
with the highest probability strategy. It can be observed
that this strategy tends to confuse right and left
ideology in the identification of binary ideology:
Figure 3b shows that 78.05% of the cases have been
mispredicted. As for the identification of multi-class
ideology, the model performs better than the mode strategy
in identifying left and moderate right, with probabilities
of 40% and 52.94%, respectively. However, it tends to
misidentify moderate left and right ideology poorly, with
hit probabilities of 9.45% and 45.45% and a difference of
89.86% and 40.91% with respect to the mode strategy (see
Figures 2c and 3c).</p>
        <p>The official leaderboard for this task is depicted in
Table 4. We achieved the sixth position in the ranking
with an average F1 score of 0.7042. However, our binary
political ideology classification model obtained the
fourth-best result with an F1 score of 0.8663.</p>
        <sec id="sec-2-2-1">
          <title>5. Conclusion</title>
          <p>This paper summarizes the participation of the
UMUTeam in the PoliticIT shared task (EVALITA 2023). We
ranked 6th out of 7 on the mean of all F1 scores (70.42%)
for the demographic and psychographic feature
identification models. For this, we used the fine-tuning
approach of a pre-trained transformer-based masked
language model for Italian called
dbmdz/bert-base-italian-cased to carry out the
identification of the different features.</p>
          <p>As future work, we are planning to improve our
pipeline using an expanded transformer-based masked
language model with political speech, i.e., extending a
Masked Language Model (MLM) with political text and later
fine-tuning this model for detecting political ideology.
In addition, we are planning to fine-tune other
pre-trained Italian models to see whether they improve on
the dbmdz/bert-base-italian-cased performance.</p>
        </sec>
        <sec id="sec-2-2-2">
          <title>Acknowledgments</title>
          <p>This work is part of the research projects
LaTe4PoliticES (PID2022-138099OB-I00), funded by
MCIN/AEI/10.13039/501100011033 and by the European Fund
for Regional Development (FEDER)-a way to make Europe,
and LaTe4PSP
(PID2019-107652RB-I00/AEI/10.13039/501100011033), funded
by MCIN/AEI/10.13039/501100011033. This work is also part
of the research project AIInFunds (PDC2021-</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] B. Verhulst, L. J. Eaves, P. K. Hatemi, Correlation not causation: The relationship between personality traits and political ideologies, American Journal of Political Science 56 (2012) 34-51. doi:10.1111/j.1540-5907.2011.00568.x.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] M. Fatke, Personality traits and political ideology: A first global assessment, Political Psychology 38 (2017) 881-899. doi:10.1111/pops.12347.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] J. A. García-Díaz, R. Colomo-Palacios, R. Valencia-García, Psychographic traits identification based on political ideology: An author analysis study on Spanish politicians' tweets posted in 2020, Future Generation Computer Systems 130 (2022) 59-74. doi:10.1016/j.future.2021.12.011.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] Russo, Daniel and Jiménez Zafra, Salud María and
          <string-name>
            <given-names>García</given-names>
            <surname>Díaz</surname>
          </string-name>
          , José Antonio and Caselli, Tommaso and Guerini, Marco and
          <string-name>
            <given-names>Ureña</given-names>
            <surname>López</surname>
          </string-name>
          , Luis Alfonso, and
          <article-title>Valencia García, Rafael, Overview of PoliticIT2023@EVALITA: Political Ideology Detection in Italian Texts, 8th evaluation campaign of Natural Language Processing and Speech tools for Italian (EVALITA</article-title>
          <year>2023</year>
          )
          <article-title>(</article-title>
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>García-Díaz</surname>
          </string-name>
          , José Antonio AND Jiménez Zafra,
          <string-name>
            <surname>Salud</surname>
            <given-names>M.</given-names>
          </string-name>
          AND
          <string-name>
            <given-names>Martín</given-names>
            <surname>Valdivia</surname>
          </string-name>
          ,
          <article-title>María Teresa</article-title>
          AND
          <string-name>
            <surname>García-Sánchez</surname>
          </string-name>
          , Francisco AND Ureña López,
          <article-title>Luis Alfonso</article-title>
          AND
          <string-name>
            <given-names>Valencia</given-names>
            <surname>García</surname>
          </string-name>
          , Rafael, Overview of politices 2022:
          <article-title>Spanish author profiling for political ideology</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>69</volume>
          (
          <issue>2022a</issue>
          )
          <fpage>265</fpage>
          -
          <lpage>272</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, CoRR abs/1706.03762 (2017). URL: http://arxiv.org/abs/1706.03762. arXiv:1706.03762.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] T. Lin, Y. Wang, X. Liu, X. Qiu, A survey of transformers, AI Open 3 (2022) 111-132. URL: https://www.sciencedirect.com/science/article/pii/S2666651022000146. doi:10.1016/j.aiopen.2022.10.001.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] J. Devlin, M. Chang, K. Lee, K. Toutanova, BERT: pre-training of deep bidirectional transformers for language understanding, CoRR abs/1810.04805 (2018). URL: http://arxiv.org/abs/1810.04805. arXiv:1810.04805.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] M. Lai, S. Menini, M. Polignano, V. Russo, R. Sprugnoli, G. Venturi, EVALITA 2023: Overview of the 8th evaluation campaign of natural language processing and speech tools for Italian, in: Proceedings of the Eighth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2023), CEUR.org, Parma, Italy, 2023.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] S. Schweter, Italian BERT and ELECTRA models, 2020. URL: https://doi.org/10.5281/zenodo.4263142. doi:10.5281/zenodo.4263142.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] J. Tiedemann, Parallel data, tools and interfaces in OPUS, in: Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), European Language Resources Association (ELRA), Istanbul, Turkey, 2012, pp. 2214-2218. URL: http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>