<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Multi-Dimensional Classification System using Pre-Trained Transformer Models for Multilingual Text Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Emmanuel Arturo Torres-Santana</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manuel Enrique Balan-Euan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Instituto Tecnológico de Mérida, Computer Systems Engineering</institution>
          ,
          <addr-line>Mérida, Yucatán</addr-line>
          ,
          <country country="MX">México</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>This document presents a multi-dimensional classification system that integrates three pre-trained Transformer models for comprehensive text analysis. The system combines zero-shot classification using XLM-RoBERTa, sentiment analysis with multilingual BERT, and named entity recognition with Spanish BERT. The proposed architecture enables simultaneous classification across multiple dimensions, providing a robust solution for natural language processing in multilingual applications. Experimental results demonstrate the effectiveness of the ensemble approach for complex classification tasks.</p>
      </abstract>
      <kwd-group>
        <kwd>Natural Language Processing</kwd>
        <kwd>Transformer Models</kwd>
        <kwd>Multi-Dimensional Classification</kwd>
        <kwd>Sentiment Analysis</kwd>
        <kwd>Entity Recognition</kwd>
        <kwd>Zero-Shot Classification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Natural Language Processing (NLP) has experienced significant advances with the introduction of
Transformer architectures and pre-trained models. These models have demonstrated exceptional
capabilities across a wide range of tasks, from text understanding to language generation. However, many
real-world applications require multidimensional text analysis that goes beyond a single classification
task [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5">1, 2, 3, 4, 5</xref>
        ].
      </p>
      <p>
        This work presents an integrated system that combines three specialized Transformer models to
provide comprehensive text analysis across multiple dimensions: zero-shot thematic classification,
sentiment analysis, and named entity recognition. The main motivation is to create a unified solution
that can extract rich and multifaceted information from texts in different languages, particularly focused
on Spanish and other resource-limited languages [
        <xref ref-type="bibr" rid="ref6 ref7 ref8 ref9">6, 7, 8, 9</xref>
        ].
      </p>
      <p>The proposed system utilizes three main components: XLM-RoBERTa for zero-shot classification,
multilingual BERT for sentiment analysis, and specialized Spanish BERT for named entity recognition.
This combination makes it possible to address different aspects of textual analysis simultaneously, providing a
more complete understanding of the analyzed content.</p>
      <p>The main challenges include effective integration of multiple models, computational complexity
management, and performance optimization for real-time applications. Our approach addresses these
challenges through a modular architecture that enables parallel execution and intelligent result
aggregation.</p>
      <p>Experimental results demonstrate that the ensemble system significantly outperforms individual
models in complex classification tasks while maintaining acceptable computational efficiency for
practical applications.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Text classification has evolved from traditional methods based on manual features to deep approaches
based on neural networks. Transformer models, introduced by Vaswani et al., revolutionized the NLP
field by enabling parallel processing and capturing long-range dependencies more effectively than
recurrent architectures.</p>
      <p>BERT (Bidirectional Encoder Representations from Transformers) marked a milestone by introducing
bidirectional pre-training, allowing models to capture context from both left and right sides of each
token. Its multilingual variants, such as mBERT and XLM-RoBERTa, extended these capabilities to
multiple languages, demonstrating effective cross-lingual transfer.</p>
      <p>Named entity recognition (NER) has traditionally been approached as a sequential labeling task. BERT
models have shown excellent performance in NER by combining deep contextual representations with
specialized classification layers. For Spanish, models like BETO and specific variants have demonstrated
significant improvements over previous methods.</p>
      <p>Zero-shot classification represents an emerging paradigm where models can classify text into
categories not seen during training. This is particularly valuable for applications where defining categories
a priori is difficult or where categories change dynamically.</p>
      <p>Ensemble systems that combine multiple specialized models have shown consistent advantages over
individual models, albeit with the cost of greater computational complexity. Our work contributes to
this line of research by proposing a specific architecture for integrating specialized Transformer models.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Tourism Context</title>
      <p>
        This study is based on the Rest-Mex 2025 corpus [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ], a large-scale dataset of tourist reviews
focused on the most iconic and visited towns across Mexico. The dataset consists of 208,051 annotated
entries, each representing a tourist’s opinion and metadata collected from multiple sources. It was
released as part of the "Sentiment Analysis Magical Mexican Towns" research initiative, and is intended
exclusively for academic and research purposes.
      </p>
      <p>Each review includes a title, a textual review, and three key labels:
• Polarity: A sentiment score from 1 (very dissatisfied) to 5 (very satisfied).
• Type: The category of place described, labeled as Hotel, Restaurant, or Attractive.
• Geographic Location: The name of the town and its corresponding region (state) in Mexico.</p>
      <p>The corpus spans opinions from 40 carefully selected Mexican towns, such as Tulum, Isla Mujeres,
San Cristóbal de las Casas, and Valladolid — places known for their cultural, historical, or ecological
significance. These locations are distributed across different states of Mexico, reflecting a wide
geographic and touristic diversity. Tulum alone accounts for over 45,000 reviews, while towns like
Tapalpa and Real de Catorce have under 1,000 reviews each, illustrating a natural class imbalance in the
distribution of the data.</p>
      <p>From a classification perspective, this dataset presents a multi-label challenge that involves:
1. Assigning a sentiment polarity (ordinal classification).
2. Identifying the type of business or site (nominal classification).
3. Predicting the corresponding municipality and state (geographic classification).</p>
      <p>Such a setting closely simulates real-world tourism dynamics where both subjective experiences
(e.g., satisfaction) and structured information (e.g., location and service type) coexist. The dataset not
only enables experiments in multilingual NLP and sentiment analysis but also provides a practical
foundation for regional tourism recommendation systems or public policy insights.</p>
      <p>All preprocessing and model training in this research strictly adhere to the terms of academic use
specified by the Rest-Mex 2025 initiative.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Natural Language Processing</title>
      <p>To ensure high-quality input for training our classification models, a comprehensive text preprocessing
pipeline was implemented. The process involved several Natural Language Processing (NLP) libraries
and custom strategies to clean and normalize Spanish-language tourist reviews.</p>
      <p>Initially, each review was transformed to lowercase and stripped of trailing whitespace. We used
regular expressions to replace date patterns and long numerical sequences with placeholder characters
to prevent misleading token frequency patterns. Then, we applied two normalization strategies from
the spanlp library: NumbersToVowelsInLowerCase and NumbersToConsonantsInLowerCase.
These strategies replaced digits with semantically plausible letters, which helped maintain syntactic
structure without numerical noise.</p>
      <p>Accents and diacritics were removed using Unicode normalization (unicodedata), followed by
filtering all non-alphabetical characters. Tokenization was performed using the NLTK library, and
Spanish stopwords were removed based on an extended list. After this step, we applied lemmatization
using the spaCy model es_core_news_sm, which converts each token to its base form, improving
the semantic consistency of inputs.</p>
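As a rough illustration, the cleaning steps above can be sketched with the standard library alone; the spanlp digit-normalization strategies and the spaCy lemmatizer are replaced here by a simple digit-removal placeholder and an abridged stopword list, so this is a simplified stand-in rather than the actual Preprocesador class:

```python
import re
import unicodedata

# Abridged Spanish stopword list for illustration only; the real pipeline
# uses an extended NLTK-based list.
STOPWORDS = {"el", "la", "de", "y", "en", "un", "una", "muy"}

def preprocess(text: str) -> str:
    text = text.lower().strip()
    # Placeholder for the spanlp digit-normalization strategies
    text = re.sub(r"\d+", " ", text)
    # Remove accents/diacritics via Unicode decomposition
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    # Keep alphabetic tokens only, then drop stopwords
    tokens = re.findall(r"[a-zñ]+", text)
    return " ".join(t for t in tokens if t not in STOPWORDS)

print(preprocess("¡La comida en Mérida fue EXCELENTE el día 25/12/2023!"))
# → comida merida fue excelente dia
```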
      <p>This preprocessing logic was encapsulated in a class named Preprocesador, which was used to
transform each row in the dataset before being written to three FastText-compatible training files: one
for sentiment polarity, one for business type (restaurant, hotel, or attraction), and one for geographic
location (state and municipality). Only entries with valid and non-empty reviews and labels were
included.</p>
      <p>The resulting processed datasets were saved in plain text files following the FastText input format,
with lines like:
__label__positivo excelente comida y servicio atencion rapida
__label__hotel lugar tranquilo y limpio en el centro
__label__yucatan-merida paseo cultural interesante y economico</p>
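A minimal sketch of how a labeled row can be serialized into this FastText input format; the field names ("polarity", "review") are illustrative, not the corpus's actual column names:

```python
# FastText supervised training expects one example per line in the form
# "__label__<tag> <preprocessed text>".
def to_fasttext_line(label: str, text: str) -> str:
    return f"__label__{label} {text}"

rows = [
    {"polarity": "positivo", "review": "excelente comida y servicio"},
    {"polarity": "negativo", "review": "habitacion sucia y ruidosa"},
]

lines = [to_fasttext_line(r["polarity"], r["review"]) for r in rows]
print(lines[0])  # __label__positivo excelente comida y servicio
```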
      <p>These datasets were then used to train three independent classification models using the FastText
library, achieving efficient training with n-gram features and low-dimensional embeddings. All these steps
were crucial in preparing data that is both clean and semantically rich for downstream classification
tasks.</p>
      <p>This FastText-compatible structure allowed efficient supervised training using n-gram word
representations and low-dimensional embeddings, while remaining computationally lightweight and suitable
for large-scale experimentation.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Methodology</title>
      <sec id="sec-5-1">
        <title>5.1. System Architecture</title>
        <p>The proposed system consists of three main components that operate in a coordinated manner to
provide multidimensional analysis of input text. The architecture was designed following principles of
modularity and scalability.</p>
        <sec id="sec-5-1-1">
          <title>5.1.1. Zero-Shot Classification Component</title>
          <p>We utilize the joeddav/xlm-roberta-large-xnli model through the Hugging Face Transformers
library. This model, based on XLM-RoBERTa, was trained on the XNLI (Cross-lingual Natural
Language Inference) corpus and is capable of performing classification without prior examples in multiple
languages.</p>
          <p>Zero-shot classification is implemented through natural language inference hypothesis formulation.
For each candidate category, a hypothesis of the type "This text is about [category]" is constructed and
the probability of logical implication between the input text and the hypothesis is calculated.</p>
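The hypothesis-based formulation can be illustrated with a stub in place of the NLI model; the keyword-overlap scorer below merely stands in for the entailment probability that the XLM-RoBERTa-XNLI model would produce:

```python
# Stub entailment scorer: counts word overlap between premise and
# hypothesis. The real system scores entailment with the NLI model.
def stub_entailment_score(premise: str, hypothesis: str) -> float:
    overlap = set(premise.lower().split()) & set(hypothesis.lower().split())
    return float(len(overlap))

def zero_shot_classify(text: str, categories: list[str]) -> str:
    # One hypothesis of the form "This text is about [category]" per
    # candidate category, as described above; pick the best-scoring one.
    scores = {
        c: stub_entailment_score(text, f"This text is about {c}")
        for c in categories
    }
    return max(scores, key=scores.get)

print(zero_shot_classify("great food and friendly service", ["food", "hotels"]))
# → food
```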
        </sec>
        <sec id="sec-5-1-2">
          <title>5.1.2. Sentiment Analysis Component</title>
          <p>Sentiment analysis is performed using the nlptown/bert-base-multilingual-uncased-sentiment
model, which provides sentiment classification on a 5-point scale (1-5 stars). This model was specifically
trained on multilingual reviews and demonstrates robustness across different textual domains.</p>
        </sec>
        <sec id="sec-5-1-3">
          <title>5.1.3. Entity Recognition Component</title>
          <p>For NER, we employ mrm8488/bert-spanish-cased-finetuned-ner with simple aggregation
strategy. This model is specifically fine-tuned for Spanish and recognizes standard entities such as
persons, organizations, locations, and other relevant categories.</p>
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Processing Pipeline</title>
        <p>Algorithm 1 describes the complete processing flow:
Algorithm 1 Multi-Dimensional Classification Pipeline
1: Input: input text t, candidate categories C
2: Initialize models: M_zs, M_sa, M_ner
3: // Parallel processing
4: r_zs ← M_zs(t, C) // Zero-shot classification
5: r_sa ← M_sa(t) // Sentiment analysis
6: r_ner ← M_ner(t) // Entity recognition
7: // Result aggregation
8: R ← {"classification": r_zs, "sentiment": r_sa, "entities": r_ner}
9: Return R</p>
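A sketch of Algorithm 1 in Python, with stub callables standing in for the three Transformer pipelines; the real system would submit the Hugging Face pipeline objects the same way:

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(text, categories, classify, sentiment, ner):
    # Submit the three analyses in parallel, then aggregate the results
    # into one unified structure, as in Algorithm 1.
    with ThreadPoolExecutor(max_workers=3) as pool:
        f_zs = pool.submit(classify, text, categories)
        f_sa = pool.submit(sentiment, text)
        f_ner = pool.submit(ner, text)
        return {
            "classification": f_zs.result(),
            "sentiment": f_sa.result(),
            "entities": f_ner.result(),
        }

# Stub model callables for illustration only
result = run_pipeline(
    "example text", ["travel"],
    classify=lambda t, c: c[0],
    sentiment=lambda t: 4,
    ner=lambda t: [],
)
```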
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Result Aggregation</title>
        <p>The integration of results from the three models is performed through a unified data structure that
preserves task-specific information while providing a coherent interface for downstream applications.</p>
        <p>For global system evaluation, we define a composite metric that considers performance across all
three dimensions:
S = α · F1_zs + β · F1_sa + γ · F1_ner
(1)
where α + β + γ = 1 and the weights are adjusted according to the relative importance of each task in
the specific application.</p>
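Equation (1) reduces to a small helper; the weight values used here are illustrative defaults, not tuned settings:

```python
def composite_score(f1_zs, f1_sa, f1_ner, alpha=0.4, beta=0.3, gamma=0.3):
    # Weighted sum of per-task F1 scores; weights must sum to 1 as in Eq. (1)
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * f1_zs + beta * f1_sa + gamma * f1_ner

# 0.4*0.8 + 0.3*0.9 + 0.3*0.7 = 0.8
score = composite_score(0.8, 0.9, 0.7)
```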
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Implementation</title>
      <sec id="sec-6-1">
        <title>6.1. Technical Configuration</title>
        <p>The system was implemented in Python using the following main libraries:
• Transformers (v4.21.0): For loading and executing pre-trained models
• PyTorch (v1.12.0): Underlying deep learning framework
• NumPy (v1.23.0): Numerical operations and array manipulation
• Pandas (v1.4.3): Data processing and analysis</p>
      </sec>
      <sec id="sec-6-2">
        <title>6.2. Model Initialization</title>
        <p>Model loading was optimized to minimize initialization time and memory usage:</p>
        <p>Listing 1: System Initialization
from transformers import pipeline

# Zero-shot classifier
zero_shot_classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli"
)

# Sentiment analyzer
sentiment_classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment"
)

# Named entity recognizer
ner = pipeline(
    "ner",
    model="mrm8488/bert-spanish-cased-finetuned-ner",
    aggregation_strategy="simple"
)</p>
      </sec>
      <sec id="sec-6-3">
        <title>6.3. Performance Optimizations</title>
        <p>To improve system efficiency, we implemented several optimizations:
1. Batch processing: Texts are processed in groups to leverage GPU parallelization
2. Result caching: Results from previously processed texts are stored to avoid re-computation
3. Intelligent truncation: Long texts are truncated while preserving semantically relevant
information
4. Model quantization: Numerical precision reduction to accelerate inference</p>
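Two of these optimizations, result caching and truncation, can be sketched as follows; a real deployment would truncate by tokenizer tokens rather than whitespace tokens, and the analysis function here is a stand-in for the three-model inference call:

```python
from functools import lru_cache

MAX_TOKENS = 500  # illustrative budget matching the evaluation setup

def truncate(text: str, limit: int = MAX_TOKENS) -> str:
    # Keep only the first `limit` whitespace tokens
    return " ".join(text.split()[:limit])

@lru_cache(maxsize=4096)
def cached_analyze(text: str) -> str:
    # Stand-in for the expensive three-model inference call
    return truncate(text).upper()

cached_analyze("hola mundo")   # computed
cached_analyze("hola mundo")   # served from the cache on repeat input
```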
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Experimental Evaluation</title>
      <sec id="sec-7-1">
        <title>7.1. Evaluation Datasets</title>
        <p>To evaluate system performance, we used multiple datasets covering different aspects of textual analysis:
• XNLI-es: Spanish subset of the XNLI corpus for zero-shot classification evaluation
• TASS-2020: Spanish sentiment analysis corpus from Twitter
• CoNLL-2002 NER: Standard dataset for Spanish entity recognition
• MLDoc: Multilingual document classification corpus</p>
      </sec>
      <sec id="sec-7-2">
        <title>7.2. Evaluation Metrics</title>
        <p>We evaluated performance using standard metrics for each task:
• Classification: Precision, Recall, macro and micro F1-score
• Sentiment: Accuracy, weighted F1, MAE for ordinal scores
• NER: F1 per entity, micro and macro F1, precision and recall</p>
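For reference, macro F1 over a set of labels can be computed by hand as below; scikit-learn's f1_score(average="macro") is the usual implementation:

```python
def macro_f1(gold, pred):
    # Per-label F1 averaged uniformly over all observed labels
    labels = set(gold) | set(pred)
    f1s = []
    for lab in labels:
        tp = sum(g == p == lab for g, p in zip(gold, pred))
        fp = sum(p == lab and g != lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```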
      </sec>
      <sec id="sec-7-3">
        <title>7.3. Results</title>
      </sec>
      <sec id="sec-7-4">
        <title>7.4. Computational Performance Analysis</title>
        <p>Computational performance analysis was conducted on an Amazon EC2 m6i.large instance (CPU-only)
and reveals the following characteristics:
• Inference time: 1.2s average per text (500 tokens) on CPU
• Memory usage: 16GB RAM with all three models loaded
• Throughput: 230 texts/minute on EC2 m6i.large instance
• Hardware specifications: Intel Xeon processors, 32GB RAM, no GPU acceleration</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>8. Use Cases and Applications</title>
      <sec id="sec-8-1">
        <title>8.1. Social Media Analysis</title>
        <p>The system has been successfully applied to social media content analysis, where automatic topic
classification, sentiment analysis, and entity extraction provide valuable insights for researchers and
marketing analysts.</p>
      </sec>
      <sec id="sec-8-2">
        <title>8.2. Corporate Document Processing</title>
        <p>In enterprise environments, the system facilitates automatic document classification, relevant
information extraction, and integrated customer feedback analysis.</p>
      </sec>
      <sec id="sec-8-3">
        <title>8.3. Research Tools</title>
        <p>Researchers in social sciences and digital humanities use the system for textual corpus analysis, enabling
automated identification of thematic patterns, sentiment trends, and entities of interest.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>9. Limitations and Future Work</title>
      <sec id="sec-9-1">
        <title>9.1. Current Limitations</title>
        <p>The system presents several limitations that should be considered:
• Computational cost: Simultaneous execution of three models requires significant computational
resources
• Language dependency: Although multilingual, performance varies by specific language
• Domain specificity: Models may require fine-tuning for highly specialized domains
• Latency: Processing time may be prohibitive for ultra-low latency applications</p>
      </sec>
      <sec id="sec-9-2">
        <title>9.2. Future Directions</title>
        <p>Future work will focus on:
1. Model optimization: Implementation of distillation and pruning techniques to reduce size and
accelerate inference
2. Continual learning: Development of mechanisms to update models with new data without
complete retraining
3. Modality integration: System extension to process text, images, and audio in a unified manner
4. Personalization: Development of techniques to adapt the system to specific user needs</p>
      </sec>
    </sec>
    <sec id="sec-conclusions">
      <title>10. Conclusions</title>
      <p>This work presents an integrated system for multi-dimensional text classification that effectively
combines three specialized Transformer models. Experimental results demonstrate that the ensemble
approach consistently outperforms individual models, providing richer and more reliable analysis of
textual content.</p>
      <p>The main contributions include: (1) a modular architecture for specialized model integration, (2)
performance optimizations for practical applications, and (3) comprehensive evaluation across multiple
datasets and metrics.</p>
      <p>The system demonstrates practical applicability across various domains, from social media analysis to
corporate document processing. Although limitations exist in terms of computational cost and domain
specialization, future research directions offer promising paths to address these challenges.</p>
      <p>Future research would benefit from exploring model compression techniques, end-to-end multi-task
learning, and multi-modal extensions of the proposed approach.</p>
    </sec>
    <sec id="sec-10">
      <title>Declaration on Generative AI</title>
      <p>We declare that the present manuscript has been written entirely by the authors and that no generative
artificial intelligence tools were used in its preparation, drafting, or editing.</p>
    </sec>
    <sec id="sec-11">
      <title>A. Online Resources</title>
      <p>The source code and datasets used in this work are available at:
• GitHub Repository
• Hugging Face Models
• Datasets on Zenodo</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-Gonzalez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fajardo-Delgado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Sánchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pérez-Espinosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Martínez-Miranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bustio-Martínez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <article-title>Natural language processing applied to tourism research: A systematic review and future research directions</article-title>
          ,
          <source>Journal of King Saud University - Computer and Information Sciences</source>
          <volume>34</volume>
          (
          <year>2022</year>
          )
          <fpage>10125</fpage>
          -
          <lpage>10144</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1319157822003615. doi:10.1016/j.jksuci.2022.10.010.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Olmos-Martínez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <article-title>What does the media tell us about a destination? the cancun case, seen from the usa, canada, and mexico</article-title>
          ,
          <source>International Journal of Tourism Cities</source>
          <volume>10</volume>
          (
          <year>2023</year>
          )
          <fpage>639</fpage>
          -
          <lpage>661</lpage>
          . URL: http://dx.doi.org/10.1108/IJTC-09-2022-0223. doi:10.1108/ijtc-09-2022-0223.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          , et al.,
          <article-title>Big data analytics of online news to explore destination image using a comprehensive deep-learning approach: a case from mexico</article-title>
          ,
          <source>Information Technology &amp; Tourism</source>
          <volume>26</volume>
          (
          <year>2024</year>
          )
          <fpage>147</fpage>
          -
          <lpage>182</lpage>
          . URL: https://doi.org/10.1007/s40558-023-00278-5. doi:10.1007/s40558-023-00278-5.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Diaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A. C.</given-names>
            <surname>Chávez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Ramírez-Silva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence methods to support the research of destination image in tourism. a systematic review</article-title>
          ,
          <source>Journal of Experimental &amp; Theoretical Artificial Intelligence</source>
          <volume>0</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>31</lpage>
          . doi:10.1080/0952813X.2022.2153276.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Á.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <article-title>A comprehensive deep learning approach for topic discovering and sentiment analysis of textual information in tourism</article-title>
          ,
          <source>Journal of King Saud University - Computer and Information Sciences</source>
          <volume>35</volume>
          (
          <year>2023</year>
          )
          <fpage>101746</fpage>
          . URL: http://dx.doi.org/10.1016/j.jksuci.2023.101746. doi:10.1016/j.jksuci.2023.101746.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Arce-Cárdenas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fajardo-Delgado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>López-Monroy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Martínez-Miranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pérez-Espinosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <article-title>Overview of rest-mex at iberlef 2021: Recommendation system for text mexican tourism</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>67</volume>
          (
          <year>2021</year>
          ). doi:10.26342/2021-67-14.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>López-Monroy</surname>
          </string-name>
          ,
          <article-title>A combination of sentiment analysis systems for the study of online travel reviews: Many heads are better than one</article-title>
          ,
          <source>Computación y Sistemas</source>
          <volume>26</volume>
          (
          <year>2022</year>
          ). doi:https://doi.org/10.13053/CyS-26-2-4055.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fajardo-Delgado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guerrero-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bustio-Martínez</surname>
          </string-name>
          ,
          <article-title>Overview of Rest-Mex at IberLEF 2022: Recommendation system, sentiment analysis and COVID semaphore prediction for Mexican tourist texts</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>69</volume>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bustio-Martínez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Muñis-Sánchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Pastor-López</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sánchez-Vega</surname>
          </string-name>
          ,
          <article-title>Overview of Rest-Mex at IberLEF 2023: Research on sentiment analysis task for Mexican tourist texts</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>71</volume>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>Álvarez-Carmona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Díaz-Pacheco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Aranda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Rodríguez-González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bustio-Martínez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Herrera-Semenets</surname>
          </string-name>
          ,
          <article-title>Overview of Rest-Mex at IberLEF 2025: Researching sentiment evaluation in text for Mexican magical towns</article-title>
          ,
          <volume>75</volume>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J. Á.</given-names>
            <surname>González-Barba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chiruzzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Jiménez-Zafra</surname>
          </string-name>
          ,
          <article-title>Overview of IberLEF 2025: Natural Language Processing Challenges for Spanish and other Iberian Languages</article-title>
          , in:
          <source>Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2025), co-located with the 41st Conference of the Spanish Society for Natural Language Processing (SEPLN 2025)</source>
          , CEUR-WS.org,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>