<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Unlocking Sentiments: Exploring the Power of NLP Transformers in Review Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Javier Alonso-Mencía</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University Carlos III of Madrid (UC3M)</institution>
          ,
          <addr-line>Madrid</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>Sentiment analysis, a highly coveted area within Natural Language Processing, holds significant business potential by enabling the analysis of opinions across various textual forms. This article presents the participation of the Javier Alonso-Mencía team in the REST-MEX@IberLef 2023 Sentiment Analysis track. The primary objective was to predict the polarity of tourists' opinions, as well as to identify the country of origin and the type of tourist attraction, in reviews written in Spanish.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Sentiment analysis, a crucial aspect of NLP, involves discerning opinions and their polarity. It
aids organizations in efficiently extracting opinions and identifying their source country and the
type of place being reviewed [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The tourism sector greatly benefits from sentiment analysis in
Spanish-speaking countries, particularly in countries like Colombia, Mexico, and Cuba, where
tourists actively share their experiences on social media platforms [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Mexico heavily relies on tourism for economic growth and employment [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. However, the
COVID-19 pandemic has adversely affected global tourism, posing challenges for developing
economies. To recover, it is essential to enhance productivity in sectors like tourism. Leveraging
platforms such as Tripadvisor, with their wealth of text-based data, enables us to understand
tourist preferences [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ].
      </p>
      <p>
        Recent advancements in NLP, particularly Transformers, have revolutionized sentiment
analysis by comprehending context and delivering superior results [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This paper proposes
using Transformers for sentiment analysis in the REST-MEX@IberLef 2023 shared task, focusing
on tourist satisfaction and classifying the type of place (restaurant, hotel, or tourist attraction) [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
By combining sentiment analysis with the identification of source countries, this competition
aims to provide comprehensive insights into the sentiment towards various establishments in
Colombia, Mexico, and Cuba.
      </p>
      <p>
        Sentiment analysis plays a vital role in understanding opinions [
        <xref ref-type="bibr" rid="ref8">8, 9</xref>
        ]. By leveraging
advancements in NLP techniques like Transformers, this competition seeks to achieve state-of-the-art
sentiment analysis results and extract valuable insights from vast text data [<xref ref-type="bibr" rid="ref10 ref11 ref12">10, 11, 12</xref>].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <sec id="sec-2-1">
        <title>2.1. Data analysis</title>
        <p>The collection of labeled data given in the competition consists of a total of 251,702 comments
obtained from tourists who shared their opinion on TripAdvisor between 2002 and 2022. Each
comment contains the following fields: title, review, attraction (type of place), polarity (1-5) and
country (Colombia, Cuba, Mexico).</p>
        <p>First, it was necessary to check whether the training dataset was well balanced, that is, whether all
classes have a similar number of instances. With an unbalanced dataset, the models cannot
train properly on the least represented classes, which might hinder inference because there is
not enough signal to classify them.</p>
        <p>The analysis of the polarity distribution in Figure 1 shows that polarity is heavily biased towards
the most positive value (5), which represents 60% of the polarity values, while the
lowest polarity (value 1) accounts for only about 2%.</p>
        <p>The type of place (attraction) and the country are more evenly distributed: "Attractive" and
"Mexico" each account for almost 50% of their respective attributes, with the other half
divided among the remaining classes.</p>
        <p>A preprocessing step was performed to remove instances with empty titles, empty
reviews, or reviews longer than 5,000 characters. In total, 411 long instances were removed.</p>
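        <p>A minimal sketch of this filtering step is shown below, assuming the labeled data has been loaded into a pandas DataFrame with the "Title" and "Review" columns described above; the file name is a hypothetical placeholder, since the paper does not describe the loading code.</p>
        <preformat>
import pandas as pd

# Hypothetical file name; the actual loading step is not described in the paper.
df = pd.read_csv("rest_mex_2023_train.csv")

# Remove instances with empty titles or empty reviews.
df = df[df["Title"].notna()]
df = df[df["Title"].str.strip().ne("")]
df = df[df["Review"].notna()]
df = df[df["Review"].str.strip().ne("")]

# Remove reviews longer than 5,000 characters (411 such instances in the data).
df = df[df["Review"].str.len().le(5000)]
        </preformat>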
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Data Balancing</title>
        <p>To mitigate the significant class imbalance present in the original dataset, a data balancing
strategy was implemented to improve the representation of minority classes. The initial
distribution exhibited a substantial disparity, with Polarity class 5 accounting for the majority of
instances (157,095), followed by classes 4, 3, 2, and 1, in progressively decreasing counts.
The objective was to achieve a more equitable distribution while ensuring that no class exceeded
50,000 samples after the balancing process.</p>
        <p>To accomplish this, a custom oversampling function was developed. The function took the
dataset as input, along with the column indicating the sentiment label. Firstly, it computed
the count of examples per label to quantify the existing class imbalance. Then, the maximum
number of samples to add per class was determined.</p>
        <p>The oversampling procedure involved iterating over each label individually. For each label,
the function randomly selected examples from the corresponding class’s dataframe, considering
the number of instances required to reach the maximum sample count. The chosen examples
were then appended to the original instances of the respective label. This process was repeated
for every class, ensuring that each label had a proportionate representation in the balanced
dataset.</p>
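        <p>A possible implementation of such an oversampling function is sketched below. The per-class cap of 50,000 follows the description above, but the handling of classes already above the cap and the fixed random seed are assumptions rather than details taken from the original code.</p>
        <preformat>
import pandas as pd

def oversample(df: pd.DataFrame, label_col: str, cap: int = 50_000) -> pd.DataFrame:
    """Top up each minority class by randomly duplicating its rows."""
    parts = [df]
    counts = df[label_col].value_counts()  # quantify the class imbalance
    for label, count in counts.items():
        n_to_add = cap - count  # maximum number of samples to add for this label
        if n_to_add > 0:
            extra = df[df[label_col].eq(label)].sample(
                n=n_to_add, replace=True, random_state=42
            )
            parts.append(extra)
    balanced = pd.concat(parts, ignore_index=True)
    # Shuffle to prevent any order-based bias, as described above.
    return balanced.sample(frac=1, random_state=42).reset_index(drop=True)
        </preformat>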
        <p>After oversampling, the resulting dataset was shuffled to introduce randomness and prevent
any order-based bias. The final dataset exhibited a more balanced distribution compared to the
original data. Table 2 shows the new data distribution after performing the custom oversampling.</p>
        <p>After careful consideration, alternative techniques such as data augmentation through
summarization were evaluated for addressing the class imbalance. However, based on previous
findings [<xref ref-type="bibr" rid="ref13">13</xref>], it was determined that this approach did not yield significant improvements in
the performance of the sentiment analysis model. Therefore, the decision was made to exclude
data augmentation through summarization from the current approach.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Create dataset</title>
        <p>In the dataset preparation phase, the columns of the dataset were processed to ensure
compatibility with the sentiment analysis task. First, a code snippet was executed to convert the "Polarity"
column values from a range of 1 to 5 to a more standardized range of 0 to 4. Similarly, the "Type"
column values were transformed, assigning the value of 0 to "Hotel," 1 to "Restaurant," and 2 to
"Attractive." Furthermore, the "Country" column values were modified, mapping "Mexico" to 0,
"Cuba" to 1, and "Colombia" to 2.</p>
        <p>To facilitate the analysis, a new column called "Title_Review" was created by concatenating
the "Title" and "Review" columns. This combined column would capture both the concise titles
and the full reviews, providing a complete representation of the text data.</p>
        <p>Subsequently, the dataset was shuffled to ensure the randomness of instance ordering. For
the experimental setup, 10% of the instances were set aside as a test set, while the remaining
90% constituted the training set. The resulting dataset was organized into two subsets: the
"train" subset, comprising 287,936 instances, and the "test" subset, comprising 31,993 instances.</p>
        <p>The prepared dataset [<xref ref-type="bibr" rid="ref14">14</xref>], consisting of columns such as "Title_Review", "Polarity", "Country",
and "Type", was now ready for further analysis and model training.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Training</title>
        <sec id="sec-2-4-1">
          <title>2.4.1. Transformers</title>
          <p>In recent years, significant progress has been made in leveraging Transformers for various
natural language processing (NLP) applications. This progress has been facilitated by the
availability of platforms like Huggingface [<xref ref-type="bibr" rid="ref15">15</xref>], which offer convenient means to utilize and
train pretrained models. By employing pretrained models, transfer learning can be harnessed,
allowing the model to benefit from pre-existing knowledge and thereby reducing training time
and resource requirements while enhancing performance [<xref ref-type="bibr" rid="ref16">16</xref>]. Huggingface models play a
pivotal role in simplifying this process as they offer ready-made implementations for diverse
NLP tasks.</p>
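          <p>Loading such a pretrained checkpoint with a freshly initialized classification head is a one-liner in the transformers library; the sketch below shows the polarity case with five output labels (three would be used for the type and country tasks).</p>
          <preformat>
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "PlanTL-GOB-ES/roberta-base-bne"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=5 for polarity (0-4); the pretrained encoder weights are reused
# and only the classification head is initialized from scratch.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5)
          </preformat>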
          <p>Two different transformer approaches based on RoBERTa [<xref ref-type="bibr" rid="ref17">17</xref>] were followed. RoBERTa
is an optimized version of the BERT model, focusing on the encoder part of the transformer
architecture [<xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>]. It employs masked language modeling during training, masking around
15% of the tokens. This makes RoBERTa well-suited for tasks that require sentence-level
understanding and informed decision-making.</p>
          <p>• PlanTL-GOB-ES/roberta-base-bne [<xref ref-type="bibr" rid="ref20">20</xref>]: This model is a variant of RoBERTa specifically
trained with a Spanish vocabulary. It utilizes the same tokenizer as the original RoBERTa
model for consistent tokenization.
• cardiffnlp/twitter-xlm-roberta-base [<xref ref-type="bibr" rid="ref21">21</xref>]: This model is a variant of RoBERTa trained
on 198M multilingual tweets from Twitter. It is a multilingual model.</p>
        </sec>
        <sec id="sec-2-4-2">
          <title>2.4.2. Models trained</title>
          <p>The different models trained are presented in this section. The problem was divided
into three subtasks depending on which label was to be predicted; therefore, at least one
model per attribute (Polarity, Attraction, and Country) was trained. The hyperparameter
listings below map directly onto a standard fine-tuning setup, as shown in the sketch that follows.</p>
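          <p>As a minimal illustration, a fine-tuning run with the hyperparameters of Model 2 - Polarity might look like the sketch below; the output directory is a hypothetical name, the datasets are built from the splits of Section 2.3, and all other training options are library defaults.</p>
          <preformat>
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "cardiffnlp/twitter-xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5)

# Build Huggingface datasets from the pandas splits of Section 2.3 and
# expose the polarity label under the "labels" column expected by Trainer.
train_ds = Dataset.from_pandas(train_df).rename_column("Polarity", "labels")
test_ds = Dataset.from_pandas(test_df).rename_column("Polarity", "labels")

def tokenize(batch):
    # Tokenize the concatenated title+review field from Section 2.3.
    return tokenizer(batch["Title_Review"], truncation=True, padding="max_length")

args = TrainingArguments(
    output_dir="model2_polarity",    # hypothetical output directory
    num_train_epochs=7,              # 7 epochs
    learning_rate=2e-5,              # lr: 2e-5
    per_device_train_batch_size=16,  # batch size: 16
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds.map(tokenize, batched=True),
    eval_dataset=test_ds.map(tokenize, batched=True),
)
trainer.train()
          </preformat>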
        </sec>
        <sec id="sec-2-4-3">
          <title>Polarity</title>
        </sec>
        <sec id="sec-2-4-4">
          <title>Model 1 - Polarity</title>
        </sec>
        <sec id="sec-2-4-5">
          <title>Model 2 - Polarity</title>
          <p>• cardiffnlp/twitter-xlm-roberta-base
• 7 epochs
• lr: 2e-5
• batch size: 16</p>
        </sec>
        <sec id="sec-2-4-6">
          <title>Model 3 - Polarity</title>
          <p>• PlanTL-GOB-ES/roberta-base-bne
• 2 epochs
• lr: 2.5e-5
• batch size: 16</p>
        </sec>
        <sec id="sec-2-4-7">
          <title>Model 4 - Polarity</title>
          <p>• PlanTL-GOB-ES/roberta-base-bne
• 3 epochs
• lr: 2.5e-5
• batch size: 16</p>
        </sec>
        <sec id="sec-2-4-8">
          <title>Model 5 - Polarity</title>
          <p>• cardiffnlp/twitter-xlm-roberta-base
• 8 epochs
• lr: 1e-5
• batch size: 16</p>
        </sec>
        <sec id="sec-2-4-9">
          <title>Type of attraction</title>
        </sec>
        <sec id="sec-2-4-10">
          <title>Model 1 - Attraction</title>
          <p>• cardiffnlp/twitter-xlm-roberta-base
• 4 epochs
• lr: 1e-5
• batch size: 16</p>
        </sec>
        <sec id="sec-2-4-11">
          <title>Country</title>
        </sec>
        <sec id="sec-2-4-12">
          <title>Model 1 - Country</title>
          <p>• cardiffnlp/twitter-xlm-roberta-base
• 4 epochs
• lr: 1e-5
• batch size: 16</p>
        </sec>
        <sec id="sec-2-4-13">
          <title>Model 2 - Country</title>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results and discussion</title>
      <sec id="sec-3-1">
        <title>3.1. Evaluation</title>
        <p>The evaluation section assesses the performance and effectiveness of the developed models in
addressing the research objectives. This section presents the evaluation metrics used to measure
the models’ performance.</p>
        <p>The evaluation results in Table 3 indicate the performance of the different models trained for
each class. For the "Polarity" classification task, Model 1.2 achieved the highest macro F1 score
of 0.8461, outperforming other models. In the "Type of attraction" classification, Model 2.1
demonstrated the highest macro F1 score of 0.9941, showcasing superior performance. For the
"Country" classification, Model 3.2 achieved the highest macro F1 score of 0.9566, indicating
its effectiveness. These findings underscore the significance of selecting appropriate models
tailored to specific classification tasks, ultimately enhancing the overall performance of the
NLP system.</p>
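        <p>Macro F1, the metric reported throughout, averages the per-class F1 scores so that minority classes count as much as the majority class; with scikit-learn it can be computed as in the toy example below (the label values are illustrative placeholders).</p>
        <preformat>
from sklearn.metrics import f1_score

# Illustrative gold labels and predictions on a held-out test set.
y_true = [4, 4, 3, 0, 2, 1]
y_pred = [4, 3, 3, 0, 2, 1]

macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"Macro F1: {macro_f1:.4f}")
        </preformat>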
        <table-wrap id="tab3">
          <label>Table 3</label>
          <caption>
            <p>Macro F1 scores of the polarity models.</p>
          </caption>
          <table>
            <thead>
              <tr><th>Model</th><th>Macro F1 (Polarity)</th></tr>
            </thead>
            <tbody>
              <tr><td>1.1</td><td>0.8217</td></tr>
              <tr><td>1.2</td><td>0.8461</td></tr>
              <tr><td>1.3</td><td>0.8186</td></tr>
              <tr><td>1.4</td><td>0.8508</td></tr>
              <tr><td>1.5</td><td>0.6070</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>Since it was possible to submit any number of runs, all combinations of the trained models
were submitted in order to keep those with the best results. Table 4 lists the
10 submissions created for the competition.</p>
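        <p>Since the trained models comprise five polarity models, one attraction model, and two country models, pairing one model per attribute yields exactly ten combinations, consistent with the ten submissions; a sketch of this enumeration is shown below, with the model identifiers following the numbering of Table 3 as an assumption.</p>
        <preformat>
from itertools import product

polarity_models = ["1.1", "1.2", "1.3", "1.4", "1.5"]
attraction_models = ["2.1"]
country_models = ["3.1", "3.2"]

# Every combination of one model per attribute: 5 x 1 x 2 = 10 submissions.
for i, (pol, att, cou) in enumerate(
        product(polarity_models, attraction_models, country_models), start=1):
    print(f"Submission {i}: polarity={pol}, type={att}, country={cou}")
        </preformat>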
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Competition results</title>
        <p>The sentiment analysis competition showcased impressive results, with the 6th submission, out
of a total of 10 submissions, emerging as the second-best performing entry in the competition
(see Table 5). The team’s submission comprised three models tailored for polarity, attraction,
and country classes respectively.</p>
        <p>The evaluation metrics demonstrated the effectiveness of the team’s approach in sentiment
analysis. The Sentiment Track Score achieved a noteworthy value of 0.766, highlighting the
team’s competence in accurately predicting sentiment. The macro F1 scores for each class
were remarkably high, with polarity achieving 0.602, attraction achieving 0.988, and country
achieving 0.936.</p>
        <p>The extraction of the polarity class proved to be more challenging compared to the country
or type of attraction classes. Several factors contributed to this difficulty. Firstly, the inherent
subjectivity of sentiment analysis poses challenges in accurately capturing the nuanced polarity
of textual data. Additionally, the presence of ambiguous or sarcastic language further complicates
the task of polarity classification. Furthermore, the variability and diversity of expressions
used to convey sentiment within the polarity class create additional complexity. Together,
these factors made polarity noticeably harder to extract than the country or type of attraction,
highlighting the intricacies involved in accurately discerning sentiment from textual data.</p>
        <table-wrap id="tab5">
          <label>Table 5</label>
          <caption>
            <p>Top three entries in the REST-MEX 2023 Sentiment Analysis track.</p>
          </caption>
          <table>
            <thead>
              <tr><th>Rank</th><th>Run</th><th>Sentiment Track Score</th><th>Macro F1 (Polarity)</th><th>Macro F1 (Type)</th><th>Macro F1 (Country)</th></tr>
            </thead>
            <tbody>
              <tr><td>1st</td><td>LKE-IIMAS Team_RUN_2</td><td>0.779</td><td>0.621</td><td>0.990</td><td>0.942</td></tr>
              <tr><td>2nd</td><td>javier_alonso-Team_sentiment_sub6</td><td>0.766</td><td>0.602</td><td>0.988</td><td>0.935</td></tr>
              <tr><td>3rd</td><td>IIMAS-UNAM Team_resultados</td><td>0.750</td><td>0.593</td><td>0.979</td><td>0.902</td></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>In this paper, the application of transformer-based models for sentiment analysis in the context
of an NLP competition was explored. Two models based on RoBERTa,
PlanTL-GOB-ES/roberta-base-bne and cardiffnlp/twitter-xlm-roberta-base, were employed to solve the task. The results
obtained from the evaluation indicate that this approach demonstrated strong performance
across multiple evaluation metrics.</p>
      <p>It was observed that the 6th submission, consisting of three models for polarity, type of
attraction, and country classes, achieved the second-best result in the competition.</p>
      <p>Furthermore, it was noted that the extraction of the polarity class presented greater
challenges compared to the country and type of attraction classes. The increased complexity in
accurately discerning polarity from textual data was attributed to the inherent subjectivity of
sentiment analysis, combined with the presence of ambiguous language and diverse expressions.</p>
      <p>As future work to enhance the performance of sentiment analysis models, further exploration
of preprocessing techniques and data balancing methods can be considered. The application of
more advanced preprocessing techniques, such as lemmatization, stemming, or part-of-speech
tagging, could help improve the quality of the input data by reducing noise and capturing more
meaningful features.</p>
      <p>The experimentation can be found in a GitHub repository [<xref ref-type="bibr" rid="ref22">22</xref>].</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Sentiment Analysis</article-title>
          and
          <string-name>
            <given-names>Opinion</given-names>
            <surname>Mining</surname>
          </string-name>
          , Morgan &amp; Claypool Publishers,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Qu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Microblogging after a major disaster in china: A case study of the 2010 yushu earthquake</article-title>
          ,
          <source>in: Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>S. de Turismo</surname>
          </string-name>
          (SECTUR),
          <source>Anuario estadístico de turismo en méxico</source>
          <year>2019</year>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Tiwari</surname>
          </string-name>
          ,
          <article-title>Tourism forecasting models: A review of literature</article-title>
          ,
          <source>Tourism Management Perspectives</source>
          <volume>19</volume>
          (
          <year>2016</year>
          )
          <fpage>39</fpage>
          -
          <lpage>55</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodriguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Á</surname>
          </string-name>
          .
          <string-name>
            <surname>Álvarez-Carmona</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Aranda</surname>
            ,
            <given-names>A. P.</given-names>
          </string-name>
          <string-name>
            <surname>López-Monroy</surname>
          </string-name>
          ,
          <article-title>Studying online travel reviews related to tourist attractions using nlp methods: the case of guanajuato, mexico</article-title>
          , Current Issues in Tourism (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . doi:10.1080/13683500.2021.2007227.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          )
          <fpage>5998</fpage>
          -
          <lpage>6008</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Á</surname>
          </string-name>
          .
          Álvarez-Carmona, Á. Díaz-Pacheco,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          , L. Bustio-Martínez, V.
          <string-name>
            <surname>Muñis-Sánchez</surname>
            ,
            <given-names>A. P.</given-names>
          </string-name>
          <string-name>
            <surname>Pastor-López</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Sánchez-Vega</surname>
          </string-name>
          ,
          <article-title>Overview of rest-mex at iberlef 2023: Research on sentiment analysis task for mexican tourist texts</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>71</volume>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Á</surname>
          </string-name>
          .
          <string-name>
            <surname>Álvarez-Carmona</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Aranda</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Arce-Cárdenas</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Fajardo-Delgado</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Guerrero-Rodríguez</surname>
            ,
            <given-names>A. P.</given-names>
          </string-name>
          <string-name>
            <surname>López-Monroy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Martínez-Miranda</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Pérez-Espinosa</surname>
            ,
            <given-names>A</given-names>
          </string-name>
          . Rodríguez-González,
          <article-title>Overview of rest-mex at iberlef 2021: Recommendation system for text mexican tourism</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>67</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] M. Á. Álvarez-Carmona, Á. Díaz-Pacheco, R. Aranda, A. Y. Rodríguez-González, D. Fajardo-Delgado, R. Guerrero-Rodríguez, L. Bustio-Martínez, Overview of rest-mex at iberlef 2022: Recommendation system, sentiment analysis and covid semaphore prediction for mexican tourist texts, Procesamiento del Lenguaje Natural 69 (2022).</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] B. Pang, L. Lee, Opinion mining and sentiment analysis, in: Foundations and Trends in Information Retrieval, volume 2, 2008, pp. 1–135.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] M. A. Álvarez Carmona, R. Aranda, A. Y. Rodríguez-Gonzalez, D. Fajardo-Delgado, M. G. Sánchez, H. Pérez-Espinosa, J. Martínez-Miranda, R. Guerrero-Rodríguez, L. Bustio-Martínez, Á. Díaz-Pacheco, Natural language processing applied to tourism research: A systematic review and future research directions, Journal of King Saud University - Computer and Information Sciences 34 (2022) 10125–10144. URL: https://www.sciencedirect.com/science/article/pii/S1319157822003615. doi:10.1016/j.jksuci.2022.10.010.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] A. Diaz-Pacheco, M. A. Álvarez Carmona, R. Guerrero-Rodríguez, L. A. C. Chávez, A. Y. Rodríguez-González, J. P. Ramírez-Silva, R. Aranda, Artificial intelligence methods to support the research of destination image in tourism. a systematic review, Journal of Experimental &amp; Theoretical Artificial Intelligence (2022) 1–31. URL: https://doi.org/10.1080/0952813X.2022.2153276. doi:10.1080/0952813X.2022.2153276.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] M. P. Enríquez, J. A. Mencía, I. Segura-Bedmar, Transformers approach for sentiment analysis: Classification of mexican tourists reviews from tripadvisor (2022).</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Javier Alonso, rest23_sentiment_data_v3_oversampling (revision 4c3329a), 2023. URL: https://huggingface.co/datasets/javilonso/rest23_sentiment_data_v3_oversampling. doi:10.57967/hf/0675.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] Hugging Face – the AI community building the future, 2023. URL: https://huggingface.co/.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] F. Zhuang, Z. Qi, K. Duan, D. Xi, Y. Zhu, H. Zhu, H. Xiong, Q. He, A comprehensive survey on transfer learning, Proceedings of the IEEE 109 (2020) 43–76.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, V. Stoyanov, Roberta: A robustly optimized bert pretraining approach, arXiv preprint arXiv:1907.11692 (2019).</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] J. Devlin, M. Chang, K. Lee, K. Toutanova, BERT: pre-training of deep bidirectional transformers for language understanding, CoRR abs/1810.04805 (2018). URL: http://arxiv.org/abs/1810.04805. arXiv:1810.04805.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need, Advances in Neural Information Processing Systems 30 (2017).</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] A. G. Fandiño, J. A. Estapé, M. Pàmies, J. L. Palao, J. S. Ocampo, C. P. Carrino, C. A. Oller, C. R. Penagos, A. G. Agirre, M. Villegas, Maria: Spanish language models, Procesamiento del Lenguaje Natural 68 (2022). URL: https://upcommons.upc.edu/handle/2117/367156. doi:10.26342/2022-68-3.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] F. Barbieri, L. E. Anke, J. Camacho-Collados, Xlm-t: Multilingual language models in twitter for sentiment analysis and beyond, 2022. arXiv:2104.12250.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] J. Alonso, Restmex23_nlp, https://github.com/javilonso/RestMex23_NLP, 2023. Accessed: May 23, 2023.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>