<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>ABCD Team at ASQP-PT 2025: Aspect Sentiment Quad Prediction in Portuguese as Text Generation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bui Hong Son</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dang Van Thin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Information Technology-VNUHCM</institution>
          ,
          <addr-line>Quarter 6, Linh Trung Ward, Thu Duc District, Ho Chi Minh City</addr-line>
          ,
          <country country="VN">Vietnam</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Vietnam National University</institution>
          ,
          <addr-line>Ho Chi Minh City</addr-line>
          ,
          <country country="VN">Vietnam</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>This paper presents our system for the Aspect Sentiment Quad Prediction (ASQP) shared task on Portuguese hotel reviews. Our system leverages text generation-based language models to extract quadruples consisting of aspect categories, aspect terms, sentiment terms, and polarities from customer reviews. Our methodology focuses on a structured text generation paradigm that encodes the relationship between aspects and sentiments through a custom output format. Experimental results on the ABSAPT-2025 shared task dataset demonstrate the efficacy of our approach in handling the complexities of sentiment analysis in low-resource languages. The proposed model achieves competitive performance compared to the baseline and obtains an F1-score of 45.66% (rank 1) in the ASQP-PT 2025 [1, 2] shared task.</p>
      </abstract>
      <kwd-group>
        <kwd>Portuguese language</kwd>
        <kwd>sentiment analysis</kwd>
        <kwd>aspect-based sentiment analysis</kwd>
        <kwd>aspect sentiment quad prediction</kwd>
        <kwd>sequence-to-sequence</kwd>
<kwd>large language models</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        ABSA has evolved from simple sentiment classification to increasingly fine-grained analysis. Early
approaches such as those presented in SemEval tasks [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3, 4, 5</xref>
        ] addressed ABSA as separate subtasks:
aspect extraction, aspect categorization, and sentiment classification. These approaches typically
employed pipeline architectures where errors propagated through sequential components. More recent
work has moved toward joint modeling of ABSA subtasks. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] introduced the ASQP task, which unifies
all ABSA subtasks into a single prediction challenge. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] proposed a unified generative framework that
converts all ABSA subtasks into text generation problems. Similarly, [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] demonstrated the effectiveness
of sequence-to-sequence approaches for aspect sentiment triplet extraction.
      </p>
      <p>
        Multilingual approaches to sentiment analysis have gained popularity with the advent of
cross-lingual pre-trained language models. Pre-trained models such as mBERT [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], XLM-R [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], and mT5 [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]
have demonstrated remarkable cross-lingual transfer capabilities. For Portuguese specifically, prior
work has been limited. The ABSAPT-2022 and ABSAPT-2024 shared tasks [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ] focused on aspect
term extraction and sentiment orientation but did not address the complete ASQP challenge. To our
knowledge, the ABSAPT-2025 task represents the first benchmark for complete ASQP in Portuguese.
      </p>
      <p>
        Recent advances in NLP have demonstrated the effectiveness of framing structured prediction tasks
as sequence-to-sequence problems. [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] showed that T5 models can effectively handle diverse NLP
tasks through text-to-text transfer. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] introduced BART, which achieved state-of-the-art performance
on several generation tasks. For structured prediction specifically, [16] demonstrated that
sequence-to-sequence models can effectively generate outputs with complex structure when provided with
appropriate output templates. This approach has been successfully applied to information extraction
tasks [17] and semantic parsing [18].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>As illustrated in Figure 1, our approach implements an end-to-end sequence-to-sequence workflow for
Aspect Sentiment Quad Prediction. The process begins with raw JSON data containing Portuguese
hotel reviews and their annotations, which undergoes preprocessing to normalize text and structure the
training examples. We transform this data into contextual examples where each input-output pair follows
a specific pattern. As shown in the workflow diagram, this creates paired training samples where the
model learns to generate structured outputs from unstructured review text. The example demonstrates
how the Portuguese review "Furtaram a minha mala no hotel com todas as minhas coisas dentro
e o hotel não se responsabilizou por nada. Nem desconto na tarifa eu tive." is transformed into
the structured output "general is negative because hotel is não se responsabilizou," capturing the
aspect category, polarity, aspect term, and sentiment term in a single coherent structure.</p>
      <p>The processed data is used to fine-tune a pre-trained language model (mT5), which is then evaluated
against test samples following the same format. This unified approach offers several advantages over
pipeline methods, particularly in capturing the interdependencies between aspect categories, terms,
sentiments, and polarities.</p>
      <sec id="sec-3-1">
        <title>3.1. Problem Formulation</title>
        <p>We conceptualize Aspect Sentiment Quad Prediction (ASQP) as a structured sequence transduction
problem. Given an input review text x = {x_1, x_2, ..., x_n} consisting of n tokens, our objective is to
generate a set of quadruples Q = {q_1, q_2, ..., q_m}, where each quadruple q_i = (c_i, a_i, s_i, p_i) comprises
an aspect category c_i ∈ C, an aspect term a_i (a contiguous span in x), a sentiment term s_i (another
span in x), and a sentiment polarity p_i ∈ {positive, negative, neutral}.</p>
        <p>The fundamental challenge lies in capturing the complex interdependencies among these four
elements while maintaining computational efficiency. Rather than decomposing ASQP into subtasks, as
has been common in prior ABSA research, we adopt a holistic generative approach that leverages the
semantic understanding capabilities of large pre-trained language models.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Model Architecture</title>
        <sec id="sec-3-2-1">
          <title>3.2.1. Portuguese T5</title>
          <p>The T5-portuguese model [19] represents a language-specific adaptation of the original T5 architecture,
specifically pre-trained on a large corpus of Portuguese text. Unlike multilingual variants, this model
concentrates its entire parameter capacity on a single language, potentially offering more nuanced
linguistic representations for Portuguese-specific phenomena. The model follows the encoder-decoder
architecture of the original T5:
• Encoder: Transforms the input review text into contextualized representations
• Decoder: Autoregressively generates the structured output containing sentiment quadruples
The T5-portuguese model benefits from:
• Focused pre-training on Portuguese linguistic patterns
• Enhanced handling of Portuguese morphology and syntax
• Better representation of Portuguese-specific semantic nuances
• Domain adaptation to Portuguese web content</p>
          <p>This language-specific pre-training theoretically provides advantages in capturing the subtleties
of sentiment expression in Portuguese, which often employs complex verbal constructions and rich
adjectival morphology that differ significantly from other Romance languages.</p>
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Multilingual T5</title>
          <p>In parallel, we evaluate the multilingual T5 model [20], specifically the base variant containing
approximately 580M parameters. The mT5 architecture follows the same encoder-decoder paradigm but has
been pre-trained on mC4, a massive multilingual corpus covering 101 languages including Portuguese.</p>
          <p>The decision to include mT5 is motivated by several factors:
• Its substantial exposure to Portuguese data during pre-training
• Its proven cross-lingual transfer capabilities, allowing it to leverage patterns learned from
high-resource languages
• The potential for more robust representations through cross-lingual knowledge sharing
We hypothesize that while mT5 allocates only a fraction of its parameter capacity to Portuguese, the
cross-lingual transfer capabilities may compensate by adapting knowledge from related high-resource
languages like Spanish and French.</p>
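          <p>As a sketch of how the two backbones can be loaded (the google/mt5-base identifier is the one used in Section 4.2; the PTT5 checkpoint name and the handling of the [ssep] separator as an added token are our assumptions):</p>
          <preformat>
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MT5_NAME = "google/mt5-base"                        # multilingual T5, ~580M parameters
PTT5_NAME = "unicamp-dl/ptt5-base-portuguese-vocab" # assumed id for the Portuguese T5 (PTT5)

tokenizer = AutoTokenizer.from_pretrained(MT5_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MT5_NAME)

# Register the [ssep] separator used in the output format so the model can emit it
# as a single unit (an assumption about how the special token is handled).
tokenizer.add_tokens(["[ssep]"])
model.resize_token_embeddings(len(tokenizer))
</preformat>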
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Input and Output Formulation</title>
        <p>For input encoding, we prepend a task-specific prompt to each review:
Please extract the list of categories, aspects, sentiments and polarities in the
following comment: "{review_text}".</p>
        <p>For output encoding, we design a structured format that explicitly captures the relationships between
elements in each quadruple:
{category} is {polarity} because {aspect} is {sentiment} [ssep] ...</p>
        <p>Here, [ssep] is a special separator token used to delineate multiple quadruples. This format offers
several advantages:
1. It expresses the logical relationship between aspects and sentiments
2. It maintains a consistent structure that the model can learn to reproduce
3. It allows for variable numbers of quadruples per review
4. It reduces the need for complex output parsing mechanisms
This formulation transforms ASQP into a direct text generation problem, allowing us to leverage the
language model’s implicit knowledge without introducing task-specific architecture modifications.</p>
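        <p>A minimal sketch of this serialization (the helper names are ours) is shown below:</p>
        <preformat>
def format_input(review_text: str) -> str:
    return ("Please extract the list of categories, aspects, sentiments and "
            "polarities in the following comment: \"" + review_text + "\".")

def format_output(quads) -> str:
    # quads: iterable of (category, aspect, sentiment, polarity) tuples
    parts = [f"{c} is {p} because {a} is {s}" for (c, a, s, p) in quads]
    return " [ssep] ".join(parts)

# Example target for a review with a single quadruple:
print(format_output([("general", "hotel", "não se responsabilizou", "negative")]))
# general is negative because hotel is não se responsabilizou
</preformat>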
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Model Training</title>
        <p>For model optimization, we employ a token-level cross-entropy loss:
ℒ = − ∑_{t=1}^{|y|} log p(y_t | y_&lt;t, x; θ)
where y represents the target output sequence, x represents the input sequence, and θ represents
the model parameters.</p>
        <p>We fine-tune all parameters of the pre-trained mT5 model using the AdamW optimizer with a learning
rate of 3 × 10⁻⁴ and a linear warmup schedule over 10 percent of training steps. This approach allows
the model to adapt its pre-trained representations to the specific linguistic patterns of Portuguese
sentiment expression.</p>
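        <p>A sketch of this optimization setup follows (the number of training steps is a placeholder; in practice it is derived from the dataset size, batch size, and epoch count):</p>
        <preformat>
import torch
from transformers import AutoModelForSeq2SeqLM, get_linear_schedule_with_warmup

model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-base")
num_training_steps = 1000  # placeholder: (num_examples // batch_size) * num_epochs

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),  # linear warmup over 10% of steps
    num_training_steps=num_training_steps,
)

# One step: the seq2seq model returns the token-level cross-entropy when labels are given.
# outputs = model(input_ids=..., attention_mask=..., labels=...)
# outputs.loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
</preformat>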
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Settings</title>
      <sec id="sec-4-1">
        <title>4.1. Dataset</title>
        <p>We conduct experiments on the ABSAPT-2025 dataset, which consists of hotel reviews written in
Portuguese. The dataset contains annotations for aspect categories, aspect terms, sentiment terms,
and polarities. Each review may contain multiple aspect-sentiment quadruples, with an average of 2.7
quadruples per review.</p>
        <p>The reviews cover various aspects of hotel accommodations, including service, cleanliness, location,
and value. The dataset is particularly challenging due to:
• The linguistic complexity of Portuguese, with its rich morphology
• The domain-specific terminology related to hospitality
• The implicit sentiment expressions common in review language
• The variable length and structure of user-generated content</p>
      </sec>
      <sec id="sec-5-1">
        <title>4.2. Implementation Details</title>
        <p>We implement our approach using the Hugging Face Transformers library. The model
configuration is based on the google/mt5-base architecture, which contains approximately 580 million
parameters. Both input and output sequences are limited to a maximum length of 512 tokens. We train
the model using a batch size of 8 for 10 epochs. Optimization is performed using the AdamW optimizer
with a weight decay of 0.01 and a learning rate of 3 × 10⁻⁴, incorporating 10% warmup steps. To
improve memory efficiency, we use mixed-precision training with the bfloat16 format. All training is
conducted on an NVIDIA GPU with CUDA support. Additionally, dropout is applied at a rate of 0.1 for
regularization, and gradient clipping with a maximum norm of 1.0 is employed to prevent exploding
gradients.</p>
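        <p>The configuration above can be expressed with the Hugging Face Trainer API roughly as follows (a sketch; the output directory is arbitrary, and the 0.1 dropout rate is the default already set in the mT5 model configuration rather than a training argument):</p>
        <preformat>
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-asqp-pt",           # arbitrary
    num_train_epochs=10,
    per_device_train_batch_size=8,
    learning_rate=3e-4,
    weight_decay=0.01,
    warmup_ratio=0.1,                   # 10% warmup steps
    bf16=True,                          # mixed-precision training in bfloat16
    max_grad_norm=1.0,                  # gradient clipping
    predict_with_generate=True,
)
# Input and output sequences are truncated to 512 tokens at tokenization time.
</preformat>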
      </sec>
      <sec id="sec-5-2">
        <title>4.3. Inference and Post-processing</title>
        <p>At inference time, we employ beam search decoding with beam width 3, temperature 0.1, and top-p
sampling with p=0.9. This configuration balances between deterministic output (necessary for consistent
structured prediction) and diversity (useful for capturing varied expressions).</p>
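        <p>A decoding sketch for these settings (tokenizer and model refer to the fine-tuned mT5 from the previous sketches; note that in the Transformers API temperature and top-p only take effect when sampling is enabled, so one way to realize this configuration is beam search combined with sampling):</p>
        <preformat>
prompt = ("Please extract the list of categories, aspects, sentiments and "
          "polarities in the following comment: \"O hotel é ótimo.\".")
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
generated = model.generate(
    **inputs,
    max_length=512,
    num_beams=3,        # beam search with beam width 3
    do_sample=True,     # enable sampling so temperature/top-p apply
    temperature=0.1,
    top_p=0.9,
)
prediction = tokenizer.decode(generated[0], skip_special_tokens=True)
</preformat>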
        <p>The model’s generated text is post-processed using a custom parser that:
1. Splits the sequence on the [ssep] token to identify individual quadruples
2. Extracts the category, polarity, aspect, and sentiment components using regular expressions
3. Locates the exact span positions of aspect and sentiment terms in the original text
4. Handles special cases such as implicit sentiments (marked as “NULL”) and generic aspect
references</p>
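        <p>A minimal parser sketch for these steps (ours; the regular expression simply mirrors the output template, and -1 is used here to mark implicit "NULL" terms):</p>
        <preformat>
import re

QUAD_RE = re.compile(r"^(.+?) is (.+?) because (.+?) is (.+)$")

def parse_prediction(text: str, review: str):
    quads = []
    for chunk in text.split("[ssep]"):          # step 1: split on the separator token
        match = QUAD_RE.match(chunk.strip())    # step 2: extract the four components
        if not match:
            continue                            # skip malformed generations
        category, polarity, aspect, sentiment = (g.strip() for g in match.groups())
        quads.append({
            "category": category,
            "polarity": polarity,
            "aspect": aspect,
            "sentiment": sentiment,
            # step 3: locate spans in the original review text
            "aspect_start": review.find(aspect) if aspect != "NULL" else -1,
            "sentiment_start": review.find(sentiment) if sentiment != "NULL" else -1,
        })
    return quads
</preformat>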
        <p>The resulting structured data is converted to the competition’s required JSON format, including
position information for aspects and sentiment terms.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Results &amp; Discussions</title>
      <p>Table 1 and Table 2 present the performance of our system with different backbone architectures for the
quadruple extraction task. Table 1 shows our internal experimental results on a validation split derived
from the training data, allowing us to compare different model configurations and training strategies.
Table 2 presents the official competition results as evaluated by the organizers on the held-out test set,
demonstrating our system’s performance in the competitive context.</p>
      <p>As shown in Table 1, our internal validation experiments explored various configurations of model
architecture and training strategies. We evaluated both T5-portuguese and mT5 models across different
instruction prompting approaches and data augmentation techniques. For instruction prompting,
English instructions consistently outperformed both no instructions and Portuguese instructions across
both model variants. The T5-portuguese model achieved its highest F1-score of 0.52 with English
instructions, matching the best performance of the mT5. This counterintuitive finding—that English
instructions work better than Portuguese instructions for a Portuguese language task—suggests that
the models’ pre-training regimes may have better prepared them to follow English instructions. Our
data augmentation experiments showed that augmentation strategies preserving the structural
characteristics of the source data (“With source”) outperformed less constrained augmentation methods
("Without source"). However, neither augmentation approach improved upon the best non-augmented
configuration, indicating that the quality of augmented data requires further refinement to provide
meaningful benefits.</p>
      <p>Table 2 reveals that our submission (Team ABCD), whose main configuration used English instruction
prompts with the T5-portuguese model, achieved the top ranking in
the official ASQP task evaluation, with an F1-score of 0.4566 compared to the baseline’s 0.4055. The
performance gap is particularly pronounced in recall (0.4706 vs. 0.3678), suggesting our approach’s
superior ability to identify relevant quadruples across diverse review contexts.</p>
      <p>The difference between our internal validation scores (up to 0.52 F1) and the official competition score
(0.4566 F1) highlights the challenges of domain adaptation and the potential presence of distributional
shifts between training and test data. This phenomenon is common in shared tasks and underscores
the importance of robust evaluation practices that account for potential overfitting to validation data.</p>
    </sec>
    <sec id="sec-7">
      <title>6. Conclusion</title>
      <p>In this paper, we present our system, which leverages the power of sequence-to-sequence modelling
with pre-trained language models to extract quadruples consisting of aspect categories, aspect terms,
sentiment terms, and polarities from customer reviews. Experimental results on the ABSAPT-2025
shared task dataset demonstrate the efficacy of our approach in addressing the complexities of sentiment
analysis in low-resource languages. The proposed model achieves competitive performance without
requiring complex architectural modifications or parameter-efficient fine-tuning techniques, establishing
a strong baseline for future research in multilingual ASQP. For future work, we plan to fine-tune large
language models to enhance overall performance and explore data augmentation techniques to expand
the training set.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgements</title>
      <p>This research is funded by Vietnam National University Ho Chi Minh City (VNU-HCM) under grant
number C2024-26-02.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, we used ChatGPT and Grammarly to check grammar and spelling
and edit the content for clarity and coherence. After using these tools, we reviewed and edited the
content as needed and took full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J. Á.</given-names>
            <surname>González-Barba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chiruzzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Jiménez-Zafra</surname>
          </string-name>
          ,
          <article-title>Overview of IberLEF 2025: Natural Language Processing Challenges for Spanish and other Iberian Languages, in: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2025), co-located with the 41st Conference of the Spanish Society for Natural Language Processing (SEPLN 2025), CEUR-WS</article-title>
          .org,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E. P.</given-names>
            <surname>Lopes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. T.</given-names>
            <surname>Bender</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Araújo</surname>
          </string-name>
          , L. A. de Freitas, U. B.
          <string-name>
            <surname>Corrêa</surname>
          </string-name>
          ,
          <article-title>Overview of ASQP-PT at IberLEF 2025: Overview of the task on aspect-sentiment quadruple prediction in Portuguese</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>75</volume>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pontiki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Galanis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pavlopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Papageorgiou</surname>
          </string-name>
          , I. Androutsopoulos, S. Manandhar, SemEval
          <article-title>-2014 task 4: Aspect based sentiment analysis</article-title>
          ,
          <source>in: Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval</source>
          <year>2014</year>
          ),
          <year>2014</year>
          , pp.
          <fpage>27</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pontiki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Galanis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Papageorgiou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Manandhar</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Androutsopoulos</surname>
          </string-name>
          , SemEval-2015 task 12:
          <article-title>Aspect based sentiment analysis</article-title>
          , in: P. Nakov,
          <string-name>
            <given-names>T.</given-names>
            <surname>Zesch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cer</surname>
          </string-name>
          , D. Jurgens (Eds.),
          <source>Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval</source>
          <year>2015</year>
          ),
          <year>2015</year>
          , pp.
          <fpage>486</fpage>
          -
          <lpage>495</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pontiki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Galanis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Papageorgiou</surname>
          </string-name>
          , I. Androutsopoulos,
          <string-name>
            <given-names>S.</given-names>
            <surname>Manandhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>AL-Smadi</surname>
          </string-name>
          , M. AlAyyoub,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Qin</surname>
          </string-name>
          ,
          <string-name>
            <surname>O. De Clercq</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Hoste</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Apidianaki</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          <string-name>
            <surname>Tannier</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Loukachevitch</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Kotelnikov</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Bel</surname>
            ,
            <given-names>S. M.</given-names>
          </string-name>
          <string-name>
            <surname>Jiménez-Zafra</surname>
          </string-name>
          , G. Eryiğit, SemEval
          <article-title>-2016 task 5: Aspect based sentiment analysis</article-title>
          ,
          <source>in: Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval2016)</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>Aspect sentiment classification with aspect-specific opinion spans</article-title>
          , in: B.
          <string-name>
            <surname>Webber</surname>
            , T. Cohn,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>He</surname>
          </string-name>
          , Y. Liu (Eds.),
          <source>Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>3561</fpage>
          -
          <lpage>3567</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bing</surname>
          </string-name>
          , W. Lam,
          <article-title>Aspect sentiment quad prediction as paraphrase generation</article-title>
          , in: M.
          <article-title>-</article-title>
          <string-name>
            <surname>F. Moens</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Specia</surname>
          </string-name>
          , S. W.-t. Yih (Eds.),
          <source>Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>9209</fpage>
          -
          <lpage>9219</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>I.</given-names>
            <surname>Naglik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lango</surname>
          </string-name>
          ,
          <article-title>Aste transformer modelling dependencies in aspect-sentiment triplet extraction</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2409.15202. arXiv:
          <volume>2409</volume>
          .
          <fpage>15202</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          , M.-
          <string-name>
            <given-names>W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          , BERT:
          <article-title>Pre-training of deep bidirectional transformers for language understanding</article-title>
          , in: J.
          <string-name>
            <surname>Burstein</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Doran</surname>
          </string-name>
          , T. Solorio (Eds.),
          <source>Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>4171</fpage>
          -
          <lpage>4186</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Conneau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Khandelwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Chaudhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wenzek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Guzmán</surname>
          </string-name>
          , E. Grave,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Stoyanov</surname>
          </string-name>
          ,
          <article-title>Unsupervised cross-lingual representation learning at scale</article-title>
          ,
          <source>in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>8440</fpage>
          -
          <lpage>8451</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Constant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Al-Rfou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Siddhant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Raffel</surname>
          </string-name>
          ,
          <article-title>mT5: A massively multilingual pre-trained text-to-text transformer</article-title>
          ,
          <source>in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>483</fpage>
          -
          <lpage>498</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F. L.</given-names>
            da
            <surname>Silva</surname>
          </string-name>
          , G. d. S. Xavier,
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Mensenburg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Rodrigues</surname>
          </string-name>
          ,
          <string-name>
            <surname>L. P. dos Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Araújo</surname>
          </string-name>
          , U. B.
          <string-name>
            <surname>Corrêa</surname>
          </string-name>
          , L. A. de Freitas,
          <article-title>ABSAPT 2022 at IberLEF: Overview of the task on aspect-based sentiment analysis in Portuguese</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>69</volume>
          (
          <year>2022</year>
          )
          <fpage>199</fpage>
          -
          <lpage>205</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A. T.</given-names>
            <surname>Bender</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. P.</given-names>
            <surname>Lopes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Araujo</surname>
          </string-name>
          , L. A. de Freitas, U. B.
          <string-name>
            <surname>Corrêa</surname>
          </string-name>
          , Overview of ABSAPT at IberLEF 2024:
          <article-title>Overview of the task on aspect-based sentiment analysis in Portuguese</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>73</volume>
          (
          <year>2024</year>
          )
          <fpage>315</fpage>
          -
          <lpage>322</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Raffel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Narang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Matena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Exploring the limits of transfer learning with a unified text-to-text transformer</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>21</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>67</lpage>
          . URL: http://jmlr.org/papers/v21/
          <fpage>20</fpage>
          -
          <lpage>074</lpage>
          .html.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ghazvininejad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Levy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Stoyanov</surname>
          </string-name>
          , L. Zettlemoyer, BART:
          <article-title>Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension</article-title>
          ,
          <source>in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>7871</fpage>
          -
          <lpage>7880</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] G. Paolini, B. Athiwaratkun, J. Krone, J. Ma, A. Achille, R. Anubhai, C. N. dos Santos, B. Xiang, S. Soatto, Structured prediction as translation between augmented natural languages, in: International Conference on Learning Representations, 2021. URL: https://openreview.net/forum?id=US-TP-xnXI.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] Y. Hao, S. He, W. Jiao, Z. Tu, M. Lyu, X. Wang, Multi-task learning with shared encoder for non-autoregressive machine translation, arXiv preprint arXiv:2010.12868 (2020).</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] R. Shin, C. Lin, S. Thomson, C. Chen, S. Roy, E. A. Platanios, A. Pauls, D. Klein, J. Eisner, B. Van Durme, Constrained language models yield few-shot semantic parsers, in: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021, pp. 7699-7715.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] D. Carmo, M. Piau, I. Campiotti, R. Nogueira, R. Lotufo, PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data, arXiv preprint arXiv:2008.09144 (2020).</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, C. Raffel, mT5: A massively multilingual pre-trained text-to-text transformer, 2021. URL: https://arxiv.org/abs/2010.11934. arXiv:2010.11934.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>