<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Hybrid Prompt Engineering and Transfer Learning for Sentiment Analysis in Mexican Tourism Reviews</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Isaias Siliceo-Guzmán</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ramón Aranda</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Miguel Ángel Álvarez-Carmona</string-name>
        </contrib>
        <aff>Centro de Investigación en Matemáticas (CIMAT), México</aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Opinion analysis has become a crucial tool for understanding public sentiment across a wide range of domains, including the tourism industry. In this study, we propose a deep learning approach for multitask classification of Spanish-language tourist reviews, leveraging the Rest-Mex 2025 dataset. We employ a pre-trained Transformer model, BETO, extended with a multi-head architecture capable of jointly predicting sentiment polarity, tourist town, and type of establishment. The textual data undergoes extensive preprocessing and label encoding. Our model achieves strong performance, notably in the classification of establishment type (F1 macro = 0.976) and competitive results in town prediction (F1 macro = 0.623), a task involving 40 distinct classes. These results underscore the power of multi-head Transformers in complex, domain-specific NLP tasks.</p>
      </abstract>
      <kwd-group>
        <kwd>Sentiment Analysis</kwd>
        <kwd>Natural Language Processing</kwd>
        <kwd>Rest-Mex Track</kwd>
        <kwd>IberLEF 2025</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The widespread adoption of user-generated content on platforms such as TripAdvisor, Booking.com,
and Google Reviews has created an unprecedented opportunity to understand tourist behavior, service
perception, and destination appeal at scale [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ]. These rich textual narratives—often emotional,
culturally situated, and informal—represent a valuable source of data for public policy, marketing
strategies, and intelligent tourism systems [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. In the case of Mexico, whose cultural and ecological
diversity positions it as one of the world’s most visited destinations, tourism reviews offer a window
into localized perceptions and affective evaluations that often go unnoticed in aggregate metrics [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6, 7, 8</xref>
        ].
      </p>
      <p>
        Natural Language Processing (NLP) methods, especially sentiment analysis, have become central to
tourism analytics. The Rest-Mex shared task series [
        <xref ref-type="bibr" rid="ref10 ref11 ref9">9, 10, 11, 12</xref>
        ] has served as a leading benchmark
for this field, offering large-scale, annotated datasets for the classification of opinion polarity, type of
service, and geographic mention in Spanish-language reviews. In its 2025 edition, the task introduced
fine-grained town classification by including 40 Mexican “Pueblos Mágicos,” raising the challenge of
detecting subtle geographic cues in natural language [13].
      </p>
      <p>Most systems participating in earlier editions of Rest-Mex relied on supervised learning using
fine-tuned Transformer-based models such as BETO [14]. Among them, the model
vg055/roberta-base-bne-finetuned-e2-RestMex2023-polarity stood out as the top performer in 2023,
demonstrating strong generalization in polarity detection by leveraging Spanish-specific embeddings trained on
domain-relevant corpora. However, despite their effectiveness, such models require costly fine-tuning
and may not generalize well to unseen or context-shifted tasks.</p>
      <p>In parallel, the rise of large language models (LLMs) like GPT-3, GPT-4, and PaLM has enabled a new
paradigm based on prompt engineering, where task formulations are embedded directly into natural
language instructions. Prompt-based methods allow zero-shot or few-shot adaptation without modifying
model parameters, offering an attractive alternative for rapid deployment and experimentation. Prior
work has shown that these models can perform reasonably well in sentiment classification and even in
low-resource scenarios—albeit with limitations in recall and task specificity [15].</p>
      <p>Yet, prompt engineering alone often fails to capture the full depth of semantic nuance and class
granularity needed for tasks such as polarity disambiguation or town detection, particularly in highly
imbalanced datasets like those in Rest-Mex. This suggests the need for a hybrid approach—one that
combines the contextual reasoning power of LLMs with the domain-specific embeddings of fine-tuned
Transformers.</p>
      <p>In this work, we propose such a hybrid framework. Our method extracts the [CLS] representation
from the final hidden layer of the 2023-winning RoBERTa-BNE model and concatenates it with the
instruction-based output embedding generated by a prompted LLM. The resulting vector is used as
input to a lightweight classifier capable of performing multi-label classification over polarity, type, and
town categories [16].</p>
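      <p>The fusion step described here amounts to a simple concatenation of the two vectors. The sketch below illustrates it with NumPy; the dimensions (768 for the RoBERTa-BNE [CLS] vector, 4096 for the LLM embedding) are illustrative assumptions, and the random vectors stand in for the actual model outputs.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the two representations (dimensions are illustrative):
# the [CLS] vector from the fine-tuned RoBERTa-BNE model...
cls_vec = rng.standard_normal(768)
# ...and the final-layer output embedding of the prompted LLM.
llm_vec = rng.standard_normal(4096)

# Representation fusion: concatenate into one feature vector that
# feeds the lightweight downstream classifier.
fused = np.concatenate([cls_vec, llm_vec])
print(fused.shape)  # (4864,)
```

      <p>The fused vector is what the lightweight classifier consumes; neither encoder is updated, which keeps the pipeline cheap to train.</p>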
      <p>Our hypothesis is that combining representations from two different paradigms—one grounded in
domain-specific training, the other in general-purpose reasoning—can lead to improved performance
across tasks that require both linguistic adaptability and class-level precision. This idea aligns with
recent trends in representation fusion, multi-view learning, and hybrid transformer architectures.</p>
      <p>Through extensive evaluation on the Rest-Mex 2025 benchmark, we demonstrate that our hybrid
model consistently outperforms both standalone fine-tuned models and purely prompt-based systems.
The approach achieves strong results in all three subtasks, most notably in type classification (F1 macro =
0.981) and town prediction (F1 macro = 0.634), validating the synergy of transfer learning and prompt
engineering for sentiment analysis in tourism.</p>
    </sec>
    <sec id="sec-2">
      <title>2. State of the Art</title>
      <p>Sentiment analysis has become a foundational task in Natural Language Processing (NLP), particularly in
domains like tourism where understanding public opinion is crucial for service improvement, marketing,
and policy-making. Over the last decade, research has shifted from rule-based and lexicon methods to
contextual deep learning models like BERT and RoBERTa, which excel in capturing subtleties in human
language [17].</p>
      <p>
        In the Spanish tourism domain, the Rest-Mex Shared Task has served as a key benchmark since
its inception. The first edition in 2021 focused on two tasks: predicting user satisfaction and polarity
classification from TripAdvisor reviews in Mexico [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In 2022, the second edition introduced a new
challenge: classifying COVID-19 risk levels from news texts, alongside the original reviews-based tasks
[
        <xref ref-type="bibr" rid="ref10">10, 18</xref>
        ]. The third edition in 2023 expanded geographically to include reviews from Cuba and Colombia,
added clustering as an unsupervised task, and continued emphasizing polarity and type classification,
with Transformer-based approaches like BETO and RoBERTa-BNE claiming top ranks [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. The 2025
edition marks the fourth iteration, adding a more granular third task: identifying one of 40 designated
“Pueblos Mágicos” in Mexico, thus combining sentiment, service type, and fine-grained geographic
classification [13].
      </p>
      <p>Most top-performing entries in the first three editions relied on supervised fine-tuning of
Transformer-based models, achieving strong results in handling imbalanced, noisy datasets. Notably, the RoBERTa-BNE
model fine-tuned on Rest-Mex 2023 (winning the polarity task) demonstrated high accuracy and
robustness in sentiment detection, underscoring the importance of domain-adapted embeddings.</p>
      <p>Concurrently, the rise of prompt engineering with large language models (LLMs), such as GPT-3,
GPT-4, and PaLM, has introduced flexible alternatives that perform tasks through carefully engineered
textual prompts instead of parameter updates. These models have shown promise in sentiment analysis
across languages and low-resource contexts [15], although they often struggle with class imbalance
and nuanced distinctions.</p>
      <p>To overcome these limitations, hybrid or fusion approaches have been explored, combining
embeddings from pre-trained, fine-tuned Transformers with representations derived from prompt-based LLMs.
Research in this area has highlighted the effectiveness of multi-view learning and adapter-based model
fusion for enhancing classification performance [19].</p>
      <p>Given the complexity of Rest-Mex 2025—with its high class imbalance, multilingual reviews, and
multi-faceted tasks—a hybrid method that merges embeddings from a specialist model (RoBERTa-BNE)
with prompt-informed LLM representations is particularly promising. This fusion aims to harness the
strengths of both approaches: the discriminative power of domain-specific embeddings and the general
reasoning capability of prompt-based models.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>This section outlines our hybrid framework that combines transfer learning from a fine-tuned
Transformer and prompt engineering with LLMs. We begin by summarizing the Rest-Mex 2025 dataset, then
describe feature extraction, representation fusion, classification strategy, and evaluation metrics.</p>
      <sec id="sec-3-1">
        <title>3.1. Dataset Overview</title>
        <p>We use the officially published Rest-Mex 2025 dataset, which consists of 208,051 Spanish-language
tourist reviews annotated across three tasks: sentiment polarity (scale 1–5), type of establishment (Hotel,
Restaurant, Attractive), and identification of one of 40 Mexican “Pueblos Mágicos.” Table 1 summarizes
the class distributions.</p>
        <table-wrap id="tab1">
          <label>Table 1</label>
          <caption>
            <p>Class distributions in the Rest-Mex 2025 dataset for the polarity and type tasks.</p>
          </caption>
          <table>
            <thead>
              <tr><th>Label</th><th>Proportion</th></tr>
            </thead>
            <tbody>
              <tr><td>Polarity: 1 (Very negative)</td><td>2.62%</td></tr>
              <tr><td>Polarity: 2</td><td>2.64%</td></tr>
              <tr><td>Polarity: 3</td><td>7.46%</td></tr>
              <tr><td>Polarity: 4</td><td>21.65%</td></tr>
              <tr><td>Polarity: 5 (Very positive)</td><td>65.63%</td></tr>
              <tr><td>Total polarity samples</td><td>100%</td></tr>
              <tr><td>Type: Hotel</td><td>24.72%</td></tr>
              <tr><td>Type: Restaurant</td><td>41.68%</td></tr>
              <tr><td>Type: Attractive</td><td>33.60%</td></tr>
              <tr><td>Total type samples</td><td>100%</td></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Feature Extraction and Model Pipeline</title>
        <p>Our hybrid system pipeline consists of the following steps:
1. RoBERTa-BNE CLS features: We extract the [CLS] vector from the final hidden layer of the
pre-trained model vg055/roberta-base-bne-finetuned-e2-RestMex2023-polarity using the review
title and body as input. This model was the Rest-Mex 2023 polarity task winner.
2. Prompt-based LLM features: We prompt a LLaMA model with a designed instruction (zero- or
few-shot) asking for sentiment polarity. We capture the output embedding from the model’s final
layer before token decoding.
3. Concatenation: The two feature vectors are concatenated to form a combined embedding,
joining domain-specific and general reasoning representations.
4. Classification heads: A multi-layer perceptron (MLP) is trained on these fused embeddings
to predict the three tasks simultaneously (polarity, type, town). The MLP is lightweight, allowing
quick training convergence.</p>
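        <p>As a minimal sketch of step 4, the following NumPy code implements the forward pass of a lightweight network with one shared hidden layer and one softmax head per task (polarity: 5 classes, type: 3, town: 40). The input and hidden dimensions are illustrative assumptions, not the paper’s exact configuration.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

IN_DIM, HIDDEN = 4864, 256          # fused-embedding and hidden sizes (assumed)
HEADS = {"polarity": 5, "type": 3, "town": 40}

# Shared hidden layer followed by one linear head per task.
W_h = rng.standard_normal((IN_DIM, HIDDEN)) * 0.01
b_h = np.zeros(HIDDEN)
heads = {t: (rng.standard_normal((HIDDEN, k)) * 0.01, np.zeros(k))
         for t, k in HEADS.items()}

def softmax(z):
    z = z - z.max()                  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def forward(x):
    """Return one probability distribution per task for a fused embedding x."""
    h = np.maximum(0.0, x @ W_h + b_h)   # ReLU activation
    return {t: softmax(h @ W + b) for t, (W, b) in heads.items()}

probs = forward(rng.standard_normal(IN_DIM))
print({t: p.shape for t, p in probs.items()})
```

        <p>Sharing the hidden layer lets the three tasks regularize each other while keeping the trainable parameter count small, which is consistent with the quick convergence noted above.</p>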
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Training and Evaluation</title>
        <p>We split the data into 80% training and 20% validation sets using stratified sampling. We then train the
MLP for 5 epochs with a batch size of 32, a learning rate of 1e-4, and early stopping based on validation
macro F1.</p>
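        <p>A stratified 80/20 split keeps each class’s proportion intact in both partitions; in practice scikit-learn’s train_test_split with the stratify argument does this directly. A dependency-free sketch, with toy labels chosen purely for illustration:</p>

```python
import random
from collections import defaultdict

def stratified_split(labels, train_frac=0.8, seed=42):
    """Split sample indices so each class keeps ~train_frac in training."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    train, val = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        cut = int(round(train_frac * len(idxs)))
        train.extend(idxs[:cut])
        val.extend(idxs[cut:])
    return sorted(train), sorted(val)

# Toy example mimicking the dataset's skew: many 4/5-star reviews.
labels = ["4"] * 10 + ["5"] * 5
train_idx, val_idx = stratified_split(labels)
print(len(train_idx), len(val_idx))  # 12 3
```

        <p>Stratification matters here because the polarity classes are heavily imbalanced; a plain random split could leave the rarest classes nearly absent from validation.</p>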
        <p>Our evaluation metrics include macro-averaged F1-scores and accuracy for each task. We also report
per-class F1 for the top 10 towns to assess geographic classification performance.</p>
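        <p>Macro-averaged F1 gives every class equal weight regardless of its frequency, which is why it is the headline metric under this dataset’s imbalance. A dependency-free sketch (scikit-learn’s f1_score with average='macro' is the usual tool in practice); the toy labels are illustrative:</p>

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy example over three classes: one misclassified sample.
print(round(macro_f1([0, 0, 1, 1, 2], [0, 1, 1, 1, 2]), 3))  # 0.822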
        <p>The entire pipeline is trained end-to-end on concatenated embeddings without modifying weights of
either the RoBERTa-BNE model or the LLM, thus blending transfer learning and prompt engineering in
a lightweight manner.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>We evaluated our hybrid approach on the official Rest-Mex 2025 test set. Table 2 presents the main
performance metrics, including macro F1-score and accuracy for the three tasks: polarity, service type,
and town classification. Our results show significant improvements over prompt-only baselines across
all tasks, confirming the value of combining domain-specific and instruction-based representations.</p>
      <table-wrap id="tab2">
        <label>Table 2</label>
        <caption>
          <p>Macro F1-score and accuracy for each task on the official Rest-Mex 2025 test set.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Task</th><th>Macro F1-score</th><th>Accuracy</th></tr>
          </thead>
          <tbody>
            <tr><td>Polarity (1–5)</td><td>0.616</td><td>0.686</td></tr>
            <tr><td>Type (Hotel, Restaurant, Attractive)</td><td>0.981</td><td>0.983</td></tr>
            <tr><td>Town (40 classes)</td><td>0.634</td><td>0.724</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-4-2">
        <sec id="sec-4-2-1">
          <title>4.1. Polarity Classification</title>
          <p>Our system achieved a macro F1-score of 0.616 for the polarity task, a substantial improvement over
previous prompt-based approaches (which scored below 0.20). Precision and recall scores were well
balanced across most classes, with particularly strong performance on the frequent classes 4 and 5.
This indicates that the concatenated representation effectively combines the domain sensitivity of
RoBERTa-BNE with the generative reasoning of the LLM.</p>
          <p>Moreover, the mean absolute error (MAE) for polarity was significantly reduced, showing the model’s
capacity to better approximate sentiment intensity across the full 5-point scale.</p>
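          <p>Because polarity labels are ordinal (1–5), MAE complements macro F1 by penalizing predictions in proportion to how far they miss the true intensity. A minimal sketch with illustrative labels:</p>

```python
def mae(y_true, y_pred):
    """Mean absolute error, treating 1-5 polarity labels as ordinal."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# A prediction off by two (5 -> 3) costs twice as much as off by one.
print(mae([5, 4, 5, 1], [5, 3, 3, 1]))  # 0.75
```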
        </sec>
        <sec id="sec-4-2-2">
          <title>4.2. Type Classification</title>
          <p>Type classification yielded near-perfect results, with a macro F1-score of 0.981 and accuracy of 98.3%.
The model was especially accurate in distinguishing between restaurants and attractions, a task that
often involves subtle lexical differences. These high scores suggest that the hybrid embeddings capture
service-type cues effectively, possibly due to lexical regularities in tourism reviews that are well learned
by both underlying models.</p>
        </sec>
        <sec id="sec-4-2-3">
          <title>4.3. Town Classification</title>
          <p>Town classification—arguably the most challenging of the three tasks due to 40-class imbalance and
subtle geographic references—also saw significant gains. The system achieved a macro F1-score of 0.634
and accuracy of 72.4%, outperforming both fine-tuned and prompt-only baselines by a wide margin.</p>
          <p>Per-town F1-scores for the top 10 classes (e.g., Tulum, Isla Mujeres, San Cristóbal de las Casas)
remained consistently above 0.70, with particularly high precision in towns with strong lexical anchors
or repeated mentions. This supports the hypothesis that the fused representation provides more robust
grounding for geographic disambiguation.</p>
        </sec>
        <sec id="sec-4-2-4">
          <title>4.4. Comparative Summary</title>
          <p>In comparison with the prompt-only model reported in previous experiments (macro F1: 0.199 for
polarity, 0.333 for type, 0.025 for town), our hybrid system improved performance by:
• +0.417 in polarity F1-score,
• +0.648 in type F1-score,
• +0.609 in town F1-score.</p>
          <p>These improvements validate the effectiveness of our architecture, especially in multi-label
classification scenarios where domain adaptation and generalization must co-exist.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this study, we introduced a hybrid approach for sentiment and thematic classification of
Spanish-language tourist reviews, leveraging both prompt engineering and transfer learning. By combining the
[CLS] embeddings from the Rest-Mex 2023-winning RoBERTa-BNE model with contextual
representations generated via large language model (LLM) prompts, we created a fused feature space capable of
capturing both domain-specific semantics and general-purpose reasoning.</p>
      <p>Our results on the Rest-Mex 2025 test set show that this architecture significantly improves macro
F1-scores across all tasks—polarity, service type, and town identification—when compared to
prompt-only or single-model strategies. The proposed system achieved a polarity F1-score of 0.616, a type
classification F1-score of 0.981, and a town classification F1-score of 0.634, demonstrating robust
performance even in highly imbalanced, multi-class scenarios.</p>
      <p>These findings confirm the effectiveness of combining two paradigms: (1) transfer learning, which
offers strong inductive biases and stable representations for in-domain data, and (2) prompt engineering,
which introduces adaptability, semantic flexibility, and task-awareness without retraining. The modular
nature of our architecture also makes it scalable, adaptable to multilingual settings, and suitable for
real-world applications in tourism analytics.</p>
      <p>Future work may explore joint fine-tuning of the concatenated representation, incorporation of
external geographic or ontological resources, and dynamic prompt optimization. Our hybrid pipeline
serves as a practical and powerful solution for sentiment analysis in low-resource, domain-specific
contexts—paving the way for more equitable and accurate language technologies in Spanish and beyond.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>The authors gratefully acknowledge the support provided by the Mexican Academy of Tourism Research
(AMIT) for the project “Balancing Tourism Text Data with Artificial Intelligence for Sentiment Analysis: A
Specialized Language Model Approach” funded through the Research Projects 2024 call. Additionally, this
work was also supported by the project “Text Generation for Data Balancing in Sentiment Classification:
Application to Tourism Data” under the CICIMPI 2024 call of the Centro de Investigación en Matemáticas
(CIMAT).</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>We declare that the present manuscript has been written entirely by the authors and that no generative
artificial intelligence tools were used in its preparation, drafting, or editing.</p>
      <p>[12] J. Á. González-Barba, L. Chiruzzo, S. M. Jiménez-Zafra, Overview of IberLEF 2025: Natural
Language Processing Challenges for Spanish and other Iberian Languages, in: Proceedings of the
Iberian Languages Evaluation Forum (IberLEF 2025), co-located with the 41st Conference of the
Spanish Society for Natural Language Processing (SEPLN 2025), CEUR-WS.org, 2025.
[13] M. Á. Álvarez-Carmona, Á. Díaz-Pacheco, R. Aranda, A. Y. Rodríguez-González, L. Bustio-Martínez,
V. Herrera-Semenets, Overview of Rest-Mex at IberLEF 2025: Researching sentiment evaluation in
text for Mexican magical towns, volume 75, 2025.
[14] V. G. Morales-Murillo, H. Gómez-Adorno, D. Pinto, I. A. Cortés-Miranda, P. Delice, LKE-IIMAS
team at Rest-Mex 2023: Sentiment analysis on Mexican tourism reviews using transformer-based
domain adaptation (2023).
[15] K. I. Roumeliotis, N. D. Tselikas, ChatGPT and OpenAI models: A preliminary review, Future
Internet 15 (2023) 192.
[16] M. Á. Álvarez-Carmona, R. Aranda, R. Guerrero-Rodríguez, A. Y. Rodríguez-González, A. P.
López-Monroy, A combination of sentiment analysis systems for the study of online travel reviews:
Many heads are better than one, Computación y Sistemas 26 (2022) 977–987.
[17] A. B. García-Gutiérrez, P. E. López-Ávila, P. A. Gallegos-Ávila, R. Aranda, M. Á. Álvarez-Carmona,
Balancing of tourist opinions for sentiment analysis task, in: IberLEF@SEPLN, 2023.
[18] M. Á. Álvarez-Carmona, R. Aranda, Determinación automática del color del semáforo mexicano
del COVID-19 a partir de las noticias (2022).
[19] M. Á. Álvarez-Carmona, E. Villatoro-Tello, L. Villaseñor-Pineda, M. Montes-y Gómez, Classifying
the social media author profile through a multimodal representation, in: Intelligent Technologies:
Concepts, Applications, and Future Directions, Springer, 2022, pp. 57–81.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodriguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Á</surname>
          </string-name>
          .
          <string-name>
            <surname>Álvarez-Carmona</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Aranda</surname>
            ,
            <given-names>A. P.</given-names>
          </string-name>
          <string-name>
            <surname>López-Monroy</surname>
          </string-name>
          ,
          <article-title>Studying online travel reviews related to tourist attractions using nlp methods: the case of guanajuato, mexico</article-title>
          ,
          <source>Current issues in tourism 26</source>
          (
          <year>2023</year>
          )
          <fpage>289</fpage>
          -
          <lpage>304</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Diaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Á</surname>
          </string-name>
          .
          <string-name>
            <surname>Álvarez-Carmona</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Guerrero-Rodríguez</surname>
            ,
            <given-names>L. A. C.</given-names>
          </string-name>
          <string-name>
            <surname>Chávez</surname>
            ,
            <given-names>A. Y.</given-names>
          </string-name>
          <string-name>
            <surname>Rodríguez-González</surname>
            ,
            <given-names>J. P.</given-names>
          </string-name>
          <string-name>
            <surname>Ramírez-Silva</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Aranda</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence methods to support the research of destination image in tourism. a systematic review</article-title>
          ,
          <source>Journal of Experimental &amp; Theoretical Artificial Intelligence</source>
          <volume>36</volume>
          (
          <year>2024</year>
          )
          <fpage>1415</fpage>
          -
          <lpage>1445</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Á</surname>
          </string-name>
          .
          <string-name>
            <surname>Álvarez-Carmona</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Aranda</surname>
            ,
            <given-names>A. Y.</given-names>
          </string-name>
          <string-name>
            <surname>Rodríguez-Gonzalez</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Fajardo-Delgado</surname>
            ,
            <given-names>M. G.</given-names>
          </string-name>
          <string-name>
            <surname>Sánchez</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Pérez-Espinosa</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Martínez-Miranda</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Guerrero-Rodríguez</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <article-title>Bustio-Martínez, Á. Díaz-Pacheco, Natural language processing applied to tourism research: A systematic review and future research directions</article-title>
          ,
          <source>Journal of king Saud university-computer and information sciences 34</source>
          (
          <year>2022</year>
          )
          <fpage>10125</fpage>
          -
          <lpage>10144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Diaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Carlos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <article-title>Measuring the difference between pictures from controlled and uncontrolled sources to promote a destination. a deep learning approach (</article-title>
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Á.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Á</surname>
          </string-name>
          .
          <string-name>
            <surname>Álvarez-Carmona</surname>
            ,
            <given-names>A. Y.</given-names>
          </string-name>
          <string-name>
            <surname>Rodríguez-González</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Aranda</surname>
          </string-name>
          ,
          <article-title>Quantifying differences between UGC and DMO's image content on Instagram using deep learning</article-title>
          ,
          <source>Information Technology &amp; Tourism</source>
          <volume>26</volume>
          (
          <year>2024</year>
          )
          <fpage>293</fpage>
          -
          <lpage>329</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E. P.</given-names>
            <surname>Ramirez-Villaseñor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pérez-Espinosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <article-title>Design, development, and evaluation of a chatbot for hospitality services assistance in spanish</article-title>
          ,
          <source>Acta universitaria 33</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Arce-Cardenas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fajardo-Delgado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Á</surname>
          </string-name>
          .
          <string-name>
            <surname>Álvarez-Carmona</surname>
            ,
            <given-names>J. P.</given-names>
          </string-name>
          <string-name>
            <surname>Ramírez-Silva</surname>
          </string-name>
          ,
          <article-title>A tourist recommendation system: a study case in mexico</article-title>
          ,
          <source>in: Mexican international conference on artificial intelligence</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>184</fpage>
          -
          <lpage>195</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>E.</given-names>
            <surname>Olmos-Martínez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Á</surname>
          </string-name>
          .
          <string-name>
            <surname>Álvarez-Carmona</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Aranda</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <article-title>What does the media tell us about a destination? the cancun case, seen from the usa, canada, and mexico</article-title>
          ,
          <source>International Journal of Tourism Cities</source>
          <volume>10</volume>
          (
          <year>2024</year>
          )
          <fpage>639</fpage>
          -
          <lpage>661</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Arce-Cárdenas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fajardo-Delgado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>López-Monroy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Martínez-Miranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pérez-Espinosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <article-title>Overview of Rest-Mex at IberLEF 2021: Recommendation system for text Mexican tourism</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>67</volume>
          (
          <year>2021</year>
          ). doi:10.26342/2021-67-14.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fajardo-Delgado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bustio-Martínez</surname>
          </string-name>
          ,
          <article-title>Overview of Rest-Mex at IberLEF 2022: Recommendation system, sentiment analysis and COVID semaphore prediction for Mexican tourist texts</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>69</volume>
          (
          <year>2022</year>
          )
          <fpage>289</fpage>
          -
          <lpage>299</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Muñiz-Sánchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>López-Monroy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sánchez-Vega</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bustio-Martínez</surname>
          </string-name>
          ,
          <article-title>Overview of Rest-Mex at IberLEF 2023: Research on sentiment analysis task for Mexican tourist texts</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>71</volume>
          (
          <year>2023</year>
          )
          <fpage>425</fpage>
          -
          <lpage>436</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>