<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Zaragoza, Spain</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Multi-Task BERT Architecture for Sentiment Analysis and Classification of Mexican Tourism Reviews in Rest-Mex 2025</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>David Alexis García-Espinosa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luis Eduardo Flores-Luna</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ángel Andrés Moreno-Sánchez</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universidad Nacional Autónoma de México, Facultad de Ciencias</institution>,
          <addr-line>Ciudad de México</addr-line>,
          <country country="MX">México</country>
        </aff>
      </contrib-group>
      <author-notes>
        <corresp>* Corresponding author: davale@ciencias.unam.mx (D. A. García-Espinosa)</corresp>
      </author-notes>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>This paper presents a comprehensive multi-task learning approach using a multilingual BERT architecture for the Rest-Mex 2025 sentiment analysis competition at IberLEF 2025. Our system simultaneously predicts three key attributes from Spanish tourism reviews: sentiment polarity on a 1-5 scale, destination type classification among hotel, restaurant, and attraction categories, and Magical Town identification from 40 possible locations. The proposed architecture leverages bert-base-multilingual-cased as a shared backbone with task-specific classification heads, trained using mixed precision optimization and equal-weighted loss functions. Our approach achieved an Honorable Mention in the official competition with a track score of 0.6663, demonstrating competitive performance with F1-scores of 0.5821 for polarity, 0.9735 for destination type, and 0.6200 for Magical Town classification. The results represent 91.85% of the winning solution's performance and a 7.4× improvement over the baseline across all metrics. The evaluation methodology employs the official Rest-Mex 2025 weighted scoring scheme, in which Magical Town identification receives 3× importance, polarity receives 2× importance, and destination type receives 1× importance, computed as (2×polarity + 1×type + 3×town)/6. Our methodology addresses the unique challenges of Spanish tourism text analysis and provides reproducible results through detailed architectural specifications and training procedures.</p>
      </abstract>
      <kwd-group>
        <kwd>Multi-task Learning</kwd>
        <kwd>BERT</kwd>
        <kwd>Sentiment Analysis</kwd>
        <kwd>Tourism</kwd>
        <kwd>Spanish NLP</kwd>
        <kwd>Mexican Magical Towns</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The analysis of sentiment in user-generated tourism content has emerged as a critical research area,
particularly within the context of Spanish-language text processing and the unique challenges posed by
Mexican tourism discourse [1, 2, 3]. The Rest-Mex shared task, which has been a cornerstone of the
Iberian Languages Evaluation Forum (IberLEF) since 2022, specifically addresses these challenges by
providing a focused evaluation framework for sentiment analysis of Mexican tourist destinations [4, 5]. The current 2025 edition, as part of the broader IberLEF 2025 evaluation campaign [6], introduces
the novel challenge of Magical Towns identification alongside traditional sentiment analysis tasks [7].</p>
      <p>The evolution of the Rest-Mex task reflects the growing sophistication in understanding Spanish
tourism sentiment analysis. Initially introduced in 2022, the task encompassed recommendation systems,
sentiment analysis, and COVID-19 impact prediction for Mexican tourist texts [?]. The 2023 edition
refined the focus specifically on sentiment analysis for Mexican tourist texts, establishing a foundation
for comprehensive evaluation of approaches ranging from traditional rule-based methods to modern
transformer architectures [5]. The current 2025 iteration introduces the additional complexity of Magical
Town identification, creating a unique multi-task learning challenge that combines sentiment polarity,
destination type classification, and geographical location recognition.</p>
      <p>The proliferation of digital tourism platforms has generated vast repositories of user-generated
content that provide valuable insights into traveler experiences and preferences. However, the analysis
of Spanish-language tourism reviews presents distinctive challenges compared to English-language
sentiment analysis. Previous research has demonstrated the effectiveness of various methodological
approaches, including cascade classifiers for multi-class sentiment analysis [8] and unsupervised
rule-based methods adapted for Mexican tourist text processing [9]. These approaches highlight the
domain-specific nature of tourism sentiment analysis and the particular complexities introduced by
Mexican cultural and linguistic nuances. The Rest-Mex 2025 competition at IberLEF 2025 addresses
the specific challenge of analyzing Spanish-language reviews of Mexican tourist destinations, requiring
simultaneous classification of multiple attributes from textual content.</p>
      <p>The task encompasses three interconnected classification objectives: (1) sentiment polarity
determination on a five-point scale from very negative to very positive, (2) destination type classification among
hotels, restaurants, and attractions, and (3) identification of the specific Magical Town (Pueblo Mágico)
from 40 possible locations. This multi-faceted classification problem presents unique opportunities for
multi-task learning approaches that can leverage shared semantic representations.</p>
      <p>The Rest-Mex 2025 evaluation employs a sophisticated weighted scoring system that reflects the
relative difficulty and importance of each task. The final evaluation metric combines individual F1-scores
using the formula (2×polarity + 1×type + 3×town)/6, where Magical Town identification receives the
highest weight due to its complexity and importance.</p>
      <sec id="sec-1-1">
        <title>1.1. Contributions</title>
        <p>Our contribution presents a comprehensive multi-task architecture that addresses the Rest-Mex 2025
challenge through: (1) a detailed preprocessing pipeline optimized for Spanish tourism text, (2) a
multi-task BERT architecture with task-specific classification heads for 40 Magical Towns, (3) an equal-weighted
training strategy with competition-aligned evaluation, and (4) comprehensive implementation details
ensuring reproducibility.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. BERT and Multilingual Transformers</title>
        <p>BERT (Bidirectional Encoder Representations from Transformers) introduced by Devlin et al. [10]
revolutionized natural language processing through pre-trained bidirectional transformer architectures.
The multilingual variant, bert-base-multilingual-cased, extends these capabilities to 104 languages,
including Spanish, making it particularly suitable for analyzing Mexican tourism text with regional
variations and cultural references.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Multi-Task Learning in NLP</title>
        <p>Multi-task learning enables simultaneous optimization of multiple related objectives, leading to
improved generalization and reduced computational requirements compared to separate models [11]. Our
approach employs equal weighting during training to ensure balanced learning, while the competition
evaluation applies task-specific importance weights.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Tourism Sentiment Analysis</title>
        <p>Tourism-domain sentiment analysis presents unique challenges including domain-specific vocabulary,
cultural references, and geographical location identification. The Rest-Mex challenge specifically focuses
on Mexican Magical Towns, requiring understanding of cultural and geographical context specific to 40
designated locations as defined by the competition guidelines [12].</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. Data Preprocessing Pipeline</title>
        <p>The preprocessing pipeline transforms raw competition data into model-ready tensors through
systematic data cleaning, label encoding, and tokenization procedures. Figure 1 illustrates the complete data
flow from raw input to training-ready datasets.</p>
        <p>The pipeline begins with text concatenation using data['Text'] =
data['Title'].fillna('') + ' ' + data['Review'].fillna('') to create
comprehensive textual representations. Label encoding transforms categorical variables into numerical indices:
polarity ratings from the 1-5 scale to 0-4 indices, the 40 Magical Towns to indices 0-39, and destination types to
indices 0-2 for the three categories.</p>
        <p>Data splitting employs train_test_split with test_size=0.2 and random_state=42 for
reproducibility. BERT tokenization applies max_length=128, padding='max_length', and
truncation=True to achieve uniform sequence processing.</p>
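        <p>The preprocessing steps above can be sketched as follows. This is a minimal illustration with invented toy rows; the column names mirror the paper's description of the competition data, and the BERT tokenizer call (omitted here to keep the sketch self-contained) would use the reported max_length=128, padding='max_length', truncation=True settings.</p>

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in for the competition DataFrame (columns as described above).
data = pd.DataFrame({
    "Title": ["Excelente hotel", None],
    "Review": ["Muy buen servicio.", "La comida es muy cara."],
    "Polarity": [5, 2],
    "Town": ["Tulum", "Cholula"],
    "Type": ["Hotel", "Restaurant"],
})

# Text concatenation as in the pipeline: Title + ' ' + Review, NaN-safe.
data["Text"] = data["Title"].fillna("") + " " + data["Review"].fillna("")

# Label encoding: 1-5 ratings to 0-4; towns and types to categorical indices.
data["polarity_idx"] = data["Polarity"] - 1
towns = sorted(data["Town"].unique().tolist())
data["town_idx"] = data["Town"].map({t: i for i, t in enumerate(towns)})
types = ["Attractive", "Hotel", "Restaurant"]
data["type_idx"] = data["Type"].map({t: i for i, t in enumerate(types)})

# Reproducible 80/20 split with the reported seed.
train_df, test_df = train_test_split(data, test_size=0.2, random_state=42)
print(len(train_df), len(test_df))  # 1 1
```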
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Multi-Task Architecture for 40 Magical Towns</title>
        <p>Our multi-task architecture employs BERT-base-multilingual-cased as the shared backbone, extended
with task-specific classification heads for simultaneous prediction of all three target attributes. Figure 2
presents the complete model architecture with correct dimensions.</p>
        <p>The architecture consists of three main components:</p>
        <p>Shared Backbone: The BERT encoder processes tokenized input through 12 transformer layers
with 768-dimensional hidden representations. The pooler_output serves as the aggregate sentence
representation.</p>
        <p>Regularization Layer: nn.Dropout(0.3) is applied to prevent overfitting across the multi-task
objectives.</p>
        <p>Task-Specific Heads: Three independent linear layers project the 768-dimensional representation:
• Ranking Head: nn.Linear(768, 5) for polarity classification
• Town Head: nn.Linear(768, 40) for 40 Magical Towns identification
• Category Head: nn.Linear(768, 3) for destination type classification</p>
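        <p>A minimal sketch of this head layout follows. The BERT backbone is replaced by a random 768-dimensional stand-in for the pooler output so the wiring can be checked without downloading bert-base-multilingual-cased; the class and attribute names are illustrative, not the authors' exact code.</p>

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    def __init__(self, hidden_size=768):
        super().__init__()
        self.dropout = nn.Dropout(0.3)                  # regularization layer
        self.ranking_head = nn.Linear(hidden_size, 5)   # polarity (5 classes)
        self.town_head = nn.Linear(hidden_size, 40)     # 40 Magical Towns
        self.category_head = nn.Linear(hidden_size, 3)  # Hotel/Restaurant/Attraction

    def forward(self, pooler_output):
        h = self.dropout(pooler_output)
        return self.ranking_head(h), self.town_head(h), self.category_head(h)

heads = MultiTaskHeads()
pooled = torch.randn(16, 768)  # stand-in for BERT's pooler_output (batch of 16)
rank_logits, town_logits, cat_logits = heads(pooled)
print(rank_logits.shape, town_logits.shape, cat_logits.shape)
# torch.Size([16, 5]) torch.Size([16, 40]) torch.Size([16, 3])
```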
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Training Procedure with Equal-Weighted Loss</title>
        <p>The training procedure employs equal-weighted loss functions during training while preparing for
weighted evaluation. Figure 3 details the complete training workflow.</p>
        <p>Equal-Weighted Training Loss: During training, nn.CrossEntropyLoss() is computed
independently for each task, then combined with equal weighting:
L_total = L_polarity + L_town + L_type. (1)</p>
        <p>This ensures balanced learning across all tasks without bias toward any particular objective during
the training phase.</p>
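        <p>The equal-weighted loss above can be sketched as three independent cross-entropy terms summed with weight 1 each. The logits and labels below are random stand-ins for a batch of four reviews, not competition data.</p>

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def multitask_loss(rank_logits, town_logits, cat_logits, rank_y, town_y, cat_y):
    # Equal weighting keeps any single task from dominating the shared encoder.
    return (criterion(rank_logits, rank_y)
            + criterion(town_logits, town_y)
            + criterion(cat_logits, cat_y))

rank_logits = torch.randn(4, 5, requires_grad=True)
town_logits = torch.randn(4, 40, requires_grad=True)
cat_logits = torch.randn(4, 3, requires_grad=True)
loss = multitask_loss(rank_logits, town_logits, cat_logits,
                      torch.tensor([0, 4, 2, 3]),
                      torch.tensor([7, 39, 0, 12]),
                      torch.tensor([1, 0, 2, 2]))
loss.backward()  # gradients reach all three heads
print(loss.item() > 0)  # True
```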
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Implementation Details</title>
      <p>The implementation utilizes PyTorch [15] for model definition and training, with the HuggingFace
Transformers library [?] for BERT model access and tokenization. The custom RestMexDataset class
extends PyTorch's Dataset interface to provide efficient data loading and batching capabilities with
num_workers=4 and pin_memory=True for GPU optimization.</p>
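      <p>A hedged sketch of how such a Dataset can be structured follows; the field names are assumptions for illustration, not the authors' exact implementation, and toy tensors stand in for tokenized reviews. The paper's num_workers=4 and pin_memory=True settings target GPU training; num_workers=0 keeps the sketch runnable anywhere.</p>

```python
import torch
from torch.utils.data import Dataset, DataLoader

class RestMexDataset(Dataset):
    """Wraps BERT encodings plus the three label arrays (illustrative)."""
    def __init__(self, input_ids, attention_mask, rank_y, town_y, cat_y):
        self.input_ids = input_ids
        self.attention_mask = attention_mask
        self.rank_y, self.town_y, self.cat_y = rank_y, town_y, cat_y

    def __len__(self):
        return len(self.rank_y)

    def __getitem__(self, i):
        return {
            "input_ids": self.input_ids[i],
            "attention_mask": self.attention_mask[i],
            "rank": self.rank_y[i], "town": self.town_y[i], "cat": self.cat_y[i],
        }

n = 32  # toy tensors standing in for 32 tokenized reviews (seq length 128)
ds = RestMexDataset(torch.zeros(n, 128, dtype=torch.long),
                    torch.ones(n, 128, dtype=torch.long),
                    torch.randint(0, 5, (n,)),
                    torch.randint(0, 40, (n,)),
                    torch.randint(0, 3, (n,)))
loader = DataLoader(ds, batch_size=16, shuffle=True, num_workers=0)
batch = next(iter(loader))
print(batch["input_ids"].shape)  # torch.Size([16, 128])
```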
      <p>The system generates output in the exact format required by Rest-Mex 2025,
where line_counter starts from 0, ranking ranges 1-5, town represents one of the 40 Magical
Towns, and category is Attractive, Hotel, or Restaurant.</p>
      <sec id="sec-4-1">
        <title>4.1. Code Availability</title>
        <p>To support reproducibility and enable further research, our implementation and associated resources
are made available through our GitHub repository: https://github.com/Ironsss/Rest-Mex-2025-FisBio_UNAM/tree/main</p>
        <p>This open-source repository facilitates replication of our experimental methodology and supports
the broader research community's efforts in Spanish tourism text analysis and multi-task learning
applications.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results and Evaluation</title>
      <sec id="sec-5-1">
        <title>5.1. Training Performance Analysis</title>
        <p>Our implementation employed a 3-epoch training regimen using mixed precision optimization with
torch.cuda.amp.GradScaler. The training was conducted on an 80/20 train-test split of the official
Rest-Mex 2025 dataset, processing 10,403 batches per epoch with a batch size of 16.</p>
        <p>The training demonstrated consistent convergence with significant loss reduction between epochs.
The mixed precision training achieved processing rates of approximately 6.58 iterations per second
during training and 8.67 iterations per second during evaluation, indicating efficient GPU utilization.</p>
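        <p>A minimal sketch of the GradScaler-based step described above follows. On a GPU, enabled=True activates fp16 autocasting and loss scaling; enabled=False (used automatically here when no GPU is present) falls back to ordinary fp32, so the same code runs anywhere. The single linear layer is an illustrative stand-in for the full BERT multi-task model.</p>

```python
import torch
from torch.cuda.amp import GradScaler, autocast

use_amp = torch.cuda.is_available()
model = torch.nn.Linear(768, 5)  # stand-in for the multi-task BERT model
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
scaler = GradScaler(enabled=use_amp)

x = torch.randn(16, 768)
y = torch.randint(0, 5, (16,))
with autocast(enabled=use_amp):
    loss = torch.nn.functional.cross_entropy(model(x), y)
scaler.scale(loss).backward()  # scales loss to avoid fp16 gradient underflow
scaler.step(opt)               # unscales gradients, then runs the optimizer step
scaler.update()                # adapts the scale factor for the next iteration
print(loss.item() > 0)  # True
```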
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Internal Validation Results</title>
        <p>Evaluation on our internal test set (20% holdout) yielded the following macro F1-score results:</p>
        <p>The internal validation results demonstrate strong performance on destination type classification,
with moderate performance on polarity and Magical Town identification tasks. The Type prediction
task achieved nearly perfect performance (F1=0.9759), while the Magical Town classification proved
most challenging due to the large number of classes (40 towns).</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Official Competition Results</title>
        <p>The official evaluation conducted by the Rest-Mex 2025 organizing committee yielded competitive
results, positioning our team (FisBio UNAM) among the top participants. Table 4 presents the complete
competition standings with detailed metrics.</p>
        <p>Our submission achieved an honorable mention (HM) in the competition, demonstrating competitive
performance across all evaluation metrics. Table 5 provides a detailed comparison showing our position
relative to both the winning solution and the baseline.</p>
        <p>Key Performance Insights:
• Competitive Performance: Our solution achieved 91.85% of the winning performance on the
overall track score, demonstrating the effectiveness of our multi-task approach.
• Type Classification Excellence: With only a 1.44% difference from first place on type
prediction (F1=0.9735 vs. 0.9877), our model shows exceptional capability in distinguishing between
attractions, hotels, and restaurants.
• Substantial Baseline Improvement: Our approach achieved dramatic improvements over the
baseline across all metrics, with particularly notable gains in Magical Town classification (69.3×
improvement).
• Balanced Multi-Task Learning: The consistent performance across all three tasks validates our
equal-weighted training strategy, avoiding the common pitfall of task dominance in multi-task
scenarios.</p>
        <p>To better understand the limitations and systematic failures of our approach, we conducted a detailed
error analysis focusing on the most challenging cases where our multi-task classifier failed
simultaneously across all three prediction tasks. Table 6 presents actual examples from our test set where the
model incorrectly predicted sentiment polarity, magical town, and destination type for the same review.</p>
        <p>Sentiment Polarity Systematic Errors:</p>
        <p>Over-Optimism Bias: The model consistently over-predicted positive sentiment in several cases. For
example, the review describing “amplios jardines y la calidad de sus helados es excelente” (large gardens
and excellent ice cream quality) was predicted as rating 5 instead of the correct rating 4, suggesting the
model may weight positive descriptors too heavily.</p>
        <p>Under-Sensitivity to Negative Cues: Conversely, reviews with subtle negative language were
under-predicted. The phrase “Mal desde el principio” (bad from the beginning) was predicted as rating 1
instead of rating 2, indicating difficulty in distinguishing between degrees of negative sentiment.</p>
        <p>Mixed Signal Confusion: The most complex case involved a review with both positive descriptions
(“lo bonito que es este lugar”) and negative experiences (“la comida es muy cara”). The model predicted
rating 5 instead of rating 4, demonstrating challenges in balancing contradictory sentiment signals
within the same review.</p>
        <p>Magical Town Classification Systematic Errors:</p>
        <p>Geographical Region Confusion: Several misclassifications occurred between towns in similar
geographical regions. The confusion between “Cuetzalan” and “Palenque” (both featuring archaeological
and cultural attractions), and between “San Cristóbal de las Casas” and “Palenque” (both in Chiapas
state) suggests the model struggles with regional geographical distinctions.</p>
        <p>Tourism Type Overlap: The model confused “Loreto” with “Tulum” (both coastal destinations with
historical significance) and “Valladolid” with “Teotihuacan” (both featuring archaeological heritage),
indicating difficulty distinguishing between towns offering similar tourism experiences.</p>
        <p>Cultural Activity Confusion: The misclassification of “Cholula” as “Tlaquepaque” appears related
to both locations being known for artisanal activities and cultural events, suggesting the model may
over-rely on activity descriptions rather than location-specific markers.</p>
        <p>Destination Type Classification Systematic Errors:</p>
        <p>Service vs. Location Ambiguity: Multiple reviews mentioning food, service, or hospitality were
misclassified between “Hotel” and “Restaurant” categories. This suggests the model struggles when
reviews discuss multiple service aspects of a location.</p>
        <p>Activity vs. Infrastructure Confusion: Reviews describing attractions with dining facilities were
often misclassified as “Restaurant” instead of “Attractive”, indicating the model may prioritize service
mentions over primary destination purpose.</p>
        <p>Compound Experience Misclassification: The review describing “noche de jazz” and “pizza” options
was misclassified from “Attractive” to “Restaurant”, showing difficulty when attractions offer multiple
activity types.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Evaluation Methodology</title>
      <p>Following the official Rest-Mex 2025 evaluation protocol, the methodology employs F1-score macro
averaging for each individual task, followed by a weighted combination for the final competition
metric [16].</p>
      <sec id="sec-6-1">
        <title>6.1. Individual Task Evaluation</title>
        <p>Polarity Classification: F1-score computed across 5 classes {1,2,3,4,5} using
f1_score(ranking_labels_all, ranking_preds_all, average='macro') as resp_k.</p>
        <p>Type Prediction: Macro F-measure across 3 classes {Attractive, Hotel, Restaurant} computed as
rest_k.</p>
        <p>Magical Town Classification: Macro F-measure across 40 town classes computed as resmt_k,
representing the most challenging aspect due to the large number of possible locations.</p>
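        <p>The per-task macro F1 computation can be illustrated with a toy example; the label and prediction values below are invented 5-class polarity ratings, not competition data.</p>

```python
from sklearn.metrics import f1_score

ranking_labels_all = [1, 2, 3, 4, 5, 5, 4, 3]
ranking_preds_all  = [1, 2, 3, 4, 5, 4, 4, 2]
# Macro averaging: per-class F1 scores are computed first, then averaged
# with equal weight, so rare classes count as much as frequent ones.
resp_k = f1_score(ranking_labels_all, ranking_preds_all, average="macro")
print(round(resp_k, 2))  # 0.76
```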
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Final Weighted Evaluation</title>
        <p>The competition employs the official weighted scoring formula:
Track_Score = (2 × resp_k + rest_k + 3 × resmt_k) / 6. (2)
This weighting reflects the relative importance and difficulty of each task:
• Magical Town identification: 3× weight (highest complexity)
• Polarity classification: 2× weight (moderate complexity)
• Type prediction: 1× weight (baseline complexity)</p>
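        <p>As a check, the weighted formula can be evaluated with our reported official F1-scores (variable names follow Section 6.1):</p>

```python
# Reported official scores: polarity 0.5821, type 0.9735, Magical Town 0.6200.
resp_k, rest_k, resmt_k = 0.5821, 0.9735, 0.6200
track_score = (2 * resp_k + rest_k + 3 * resmt_k) / 6
print(round(track_score, 4))  # 0.6663, matching the reported track score
```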
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Technical Justifications</title>
      <sec id="sec-7-1">
        <title>7.1. 40 Magical Towns Classification</title>
        <p>The choice to design the town classification head for exactly 40 classes reflects the official competition
specification. The MAGICAL_TOWNS = data['Town'].unique().tolist() approach ensures
dynamic adaptation to the dataset while maintaining the expected 40-location constraint.</p>
      </sec>
      <sec id="sec-7-2">
        <title>7.2. Equal-Weighted Training Strategy</title>
        <p>The equal-weighted training loss prevents any single task from dominating the learning process,
ensuring the shared BERT representations capture features useful for all classification objectives. The
weighted evaluation is applied only during final scoring, maintaining training balance while respecting
competition criteria.</p>
      </sec>
      <sec id="sec-7-3">
        <title>7.3. Mixed Precision Optimization</title>
        <p>Mixed precision training provides computational efficiency benefits essential for the large-scale BERT
architecture while maintaining numerical stability for the multi-task learning objectives.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusion</title>
      <p>This paper presents a comprehensive multi-task BERT architecture for the Rest-Mex 2025 sentiment
analysis competition, accurately implementing classification for 40 Magical Towns alongside
sentiment polarity and destination type prediction. The equal-weighted training strategy combined with
competition-aligned weighted evaluation demonstrates an effective approach for balancing academic
multi-task learning principles with practical competition requirements.</p>
      <p>The implementation provides complete technical specifications, ensuring reproducibility while
achieving close alignment with the official Rest-Mex 2025 evaluation methodology. Our approach
achieved an Honorable Mention with competitive performance (91.85% of the winning solution) and
substantial improvements over the baseline (7.4× overall improvement). The detailed error analysis reveals
specific improvement opportunities, particularly in geographical knowledge integration and cultural
context enhancement for Magical Town classification.</p>
      <p>The architecture serves as a robust foundation not only for Spanish tourism text analysis but also for
related multi-task text classification problems, establishing a template for similar competition scenarios. The
open-source availability of our implementation supports reproducibility and enables further research in
Spanish NLP and tourism domain analysis.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>We declare that the present manuscript has been written entirely by the authors and that no generative
artificial intelligence tools were used in its preparation, drafting, or editing.</p>
      <p>[10] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers
for language understanding, in: Proceedings of the 2019 Conference of the North American Chapter
of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long
and Short Papers), 2019, pp. 4171–4186.
[11] S. Ruder, An overview of multi-task learning in deep neural networks, arXiv preprint
arXiv:1706.05098 (2017).
[12] Centro de Investigación en Matemáticas (CIMAT), Rest-Mex 2025: Researching on evaluating
sentiment and textual instances selection for Mexican Magical Towns, https://sites.google.com/cimat.mx/rest-mex-2025/,
2025. IberLEF 2025 Competition Guidelines.
[13] P. Micikevicius, S. Narang, J. Alben, G. Diamos, E. Elsen, D. Garcia, B. Ginsburg, M. Houston,
O. Kuchaiev, G. Venkatesh, H. Wu, Mixed precision training, in: International Conference on
Learning Representations (ICLR), 2018.
[14] I. Loshchilov, F. Hutter, Decoupled weight decay regularization, in: International Conference on
Learning Representations (ICLR), 2019.
[15] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein,
L. Antiga, et al., PyTorch: An imperative style, high-performance deep learning library, in:
Advances in Neural Information Processing Systems (NeurIPS), 2019, pp. 8024–8035.
[16] M. Sokolova, G. Lapalme, A systematic analysis of performance measures for classification tasks,
Information Processing &amp; Management 45 (2009) 427–437.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] M. A. Álvarez-Carmona, R. Aranda, A. Y. Rodríguez-González, D. Fajardo-Delgado, M. G. Sánchez, H. Pérez-Espinosa, J. Martínez-Miranda, R. Guerrero-Rodríguez, L. Bustio-Martínez, Á. Díaz-Pacheco, Natural language processing applied to tourism research: A systematic review and future research directions, Journal of King Saud University - Computer and Information Sciences 34 (2022) 10125–10144. URL: https://www.sciencedirect.com/science/article/pii/S1319157822003615. doi:10.1016/j.jksuci.2022.10.010.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Á. Díaz-Pacheco, R. Guerrero-Rodríguez, M. Á. Álvarez-Carmona, A. Y. Rodríguez-González, R. Aranda, A comprehensive deep learning approach for topic discovering and sentiment analysis of textual information in tourism, Journal of King Saud University - Computer and Information Sciences 35 (2023) 101746. doi:10.1016/j.jksuci.2023.101746.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] E. Olmos-Martínez, M. Á. Álvarez-Carmona, R. Aranda, Á. Díaz-Pacheco, What does the media tell us about a destination? The Cancun case, seen from the USA, Canada, and Mexico, International Journal of Tourism Cities 10 (2023) 639–661. doi:10.1108/IJTC-09-2022-0223.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Arce-Cárdenas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fajardo-Delgado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>López-Monroy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Martínez-Miranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pérez-Espinosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <article-title>Overview of Rest-Mex at IberLEF 2021: Recommendation system for text Mexican tourism</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>67</volume>
          (
          <year>2021</year>
          ). doi:10.26342/2021-67-14.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bustio-Martínez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Muñis-Sánchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Pastor-López</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sánchez-Vega</surname>
          </string-name>
          ,
          <article-title>Overview of Rest-Mex at IberLEF 2023: Research on sentiment analysis task for Mexican tourist texts</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>71</volume>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. Á.</given-names>
            <surname>González-Barba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chiruzzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Jiménez-Zafra</surname>
          </string-name>
          ,
          <article-title>Overview of IberLEF 2025: Natural Language Processing Challenges for Spanish and other Iberian Languages</article-title>
          ,
          <source>in: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2025), co-located with the 41st Conference of the Spanish Society for Natural Language Processing (SEPLN 2025), CEUR-WS.org</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bustio-Martínez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Herrera-Semenets</surname>
          </string-name>
          ,
          <article-title>Overview of Rest-Mex at IberLEF 2025: Researching sentiment evaluation in text for Mexican Magical Towns</article-title>
          , volume
          <volume>75</volume>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Abreu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mirabal</surname>
          </string-name>
          ,
          <article-title>Cascade of biased two-class classifiers for multi-class sentiment analysis</article-title>
          ,
          <source>in: Proceedings of the Third Workshop for Iberian Languages Evaluation Forum (IberLEF</source>
          <year>2021</year>
          ), volume
          <volume>2943</volume>
          <source>of CEUR Workshop Proceedings</source>
          , Málaga, Spain,
          <year>2021</year>
          , pp.
          <fpage>185</fpage>
          -
          <lpage>191</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>O.</given-names>
            <surname>Kellert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Uz</given-names>
            <surname>Zaman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. H.</given-names>
            <surname>Hill Matlis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gómez-Rodríguez</surname>
          </string-name>
          ,
          <article-title>Experimenting with UD adaptation of an unsupervised rule-based approach for sentiment analysis of Mexican tourist texts</article-title>
          ,
          <source>in: CEUR Workshop Proceedings</source>
          , volume
          <volume>3496</volume>
          ,
          <year>2023</year>
          , pp.
          <fpage>216</fpage>
          -
          <lpage>225</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>