<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>ABCD Team at FinancES 2023: An Unified Generative Framework for the Financial Targeted Sentiment Analysis in Spanish</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dang Van Thin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dai Nguyen Ba</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Duong Ngoc Hao</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ngan Luu-Thuy Nguyen</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Faculty of Information Science and Engineering, University of Information Technology-VNUHCM</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Information Technology-VNUHCM</institution>
          ,
          <addr-line>Quarter 6, Linh Trung Ward, Thu Duc District, Ho Chi Minh City</addr-line>
          ,
          <country country="VN">Vietnam</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Vietnam National University</institution>
          ,
          <addr-line>Ho Chi Minh City</addr-line>
          ,
          <country country="VN">Vietnam</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>This paper presents our participation in the IBERLEF 2023 Task - FinancES in Spanish, focusing on two sub-tasks: financial targeted sentiment analysis and document-level financial sentiment analysis for companies and consumers. To address these sub-tasks, we propose a unified generative framework that leverages strong pre-trained language models capable of simultaneously extracting all relevant elements from financial news headlines. Additionally, we introduce two simple auxiliary tasks designed to provide the model with additional information to distinguish the sentiment classes of different perspectives. The experimental results validate the effectiveness of our approach, as our participation system achieved a Top 3 ranking in both sub-tasks. Specifically, our best model achieved F1-scores of 78.2175% and 61.0373% for Task 1 - identifying the target term and its corresponding sentiment - and Task 2 - classifying the sentiments for companies and consumers, respectively.</p>
      </abstract>
      <kwd-group>
        <kwd>Financial Targeted Sentiment Analysis</kwd>
        <kwd>Spanish language</kwd>
        <kwd>Unified generative framework</kwd>
        <kwd>sentiment analysis</kwd>
        <kwd>aspect-based sentiment analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The shared-task IBERLEF 2023 Task [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] - FinancES: Financial Targeted Sentiment Analysis [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]
in Spanish addresses targeted sentiment analysis in the field of microeconomics. In this
shared task, two sub-tasks were proposed for participants. The first challenge, called the targeted
sentiment task, aims to extract the target entity in the text and identify the sentiment polarity
towards that target. For example, given the news headline “Acuerdos comerciales, sinónimo de
oportunidades para República Dominicana”, the output of this task is “Acuerdos comerciales”
and “positive” for the target and the sentiment class, respectively. The second task,
document-level sentiment analysis, aims to classify the sentiment polarity for
both the company and the consumer aspects. Using the same input as above, the output of this task
would be “positive” for both aspects. In both sub-tasks, the sentiment
class is one of the “positive”, “negative”, and “neutral” values.
      </p>
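To make the input/output contract of the two sub-tasks concrete, the running example above can be written out as follows. This is only an illustrative sketch; the dictionary field names are ours, not the official submission format:

```python
# Running example from the shared task: one Spanish news headline and the
# expected outputs of each sub-task. Field names here are illustrative.
headline = "Acuerdos comerciales, sinónimo de oportunidades para República Dominicana"

# Task 1: targeted sentiment - extract the target span and its polarity.
task1_output = {"target": "Acuerdos comerciales", "sentiment": "positive"}

# Task 2: document-level sentiment for the company and consumer aspects.
task2_output = {"companies": "positive", "consumers": "positive"}

# Every sentiment field takes one of three values in both sub-tasks.
LABELS = {"positive", "negative", "neutral"}
```

Note that the Task 1 target is a literal span of the headline, while Task 2 assigns one label per aspect to the whole document.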
      <p>
        With the power of pre-trained generative language models such as T5 [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and BART [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
many natural language processing tasks have been successfully addressed as text generation
problems, surpassing the performance of traditional approaches, e.g. Named Entity Recognition
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], Aspect-based Sentiment Analysis [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], among others. Therefore, this paper presents a unified
generative framework that integrates the two sub-tasks into a cohesive generative formulation.
Subsequently, we fine-tune pre-trained sequence-to-sequence language models to
address both sub-tasks effectively within an end-to-end framework. Moreover, we design two
auxiliary tasks that provide the models with information regarding the relationship between the
sentiment classes of the three sentiment objects.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>The easy accessibility of financial texts has significantly increased with the widespread use of
the Internet and the growing need for market transparency. As a result, the field of financial text
analysis has emerged. While the concept of utilizing textual analysis in the financial markets
is not entirely novel, the influence of sentiment analysis on financial markets has been firmly
established.</p>
      <p>Recently, Yıldırım et al. [7] explored diferent deep Learning approaches for Sentiment
Analysis on the financial dataset by comparing deep learning classifiers to traditional machine
learning approaches. Their findings highlighted the superior performance of LSTM models,
including bidirectional LSTM and LSTM with dropout, and revealed similar success rates among
various optimizers. Another relevant study by Lee et al. (2020)[8] employed the pre-trained
BERT-base model and achieved high accuracy in recognizing investor sentiment after
finetuning on a labelled sentiment dataset. Mishev et al.[9] investigated the evolution of sentiment
analysis methods from lexicons to transformers, emphasizing NLP transformers’ superiority
and ability to capture semantic meaning efectively. In recent years, the implementation of
the RNN-LSTM network by Kohsasih et al. [10] demonstrated that this approach can show
a significant improvement in sentiment analysis task. A study conducted by Ong et al. [ 11]
explored the statistical connection between aspect-based sentiment labels and specific stocks.
The research revealed a distinct correlation, particularly in relation to aspects such as inflation
and the economy, highlighting their significant influence on stock prices.</p>
      <p>
        Unlike previous work that relies on classification-based approaches for sentiment analysis
on financial data, in this paper we cast the two challenges of the shared task [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] as a generation
problem and leverage the power of pre-trained language models combined with two auxiliary
tasks.
      </p>
      <p>[Figure 1: Overview of the unified generative framework. The input headline “Acuerdos comerciales, sinónimo ... República Dominicana” is encoded by the mT5 model, and the decoder generates the linearized output “Acuerdos comerciales &lt;e&gt; positive &lt;e&gt; positive &lt;e&gt; positive &lt;e&gt; True &lt;e&gt; True”, covering Task 1 (Target: Acuerdos comerciales; Sentiment: positive), Task 2 (Company Sentiment: positive; Consumers Sentiment: positive), and the two auxiliary labels.]</p>
    </sec>
    <sec id="sec-3">
      <title>3. Approach</title>
      <p>headline → target [e] p<sub>target</sub> [e] p<sub>companies</sub> [e] p<sub>consumers</sub> [e] aux<sub>1</sub> [e] aux<sub>2</sub>   (1)</p>
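The linearization in Eq. (1) can be sketched as follows. This is a hypothetical helper, not the authors’ released code; it assumes the `<e>` separator token shown in the framework overview and renders the two auxiliary labels as `True`/`False` strings:

```python
# Sketch of building and parsing the linearized target sequence of Eq. (1).
# All six elements are joined with the special separator token "<e>".
SEP = " <e> "
FIELDS = ["target", "p_target", "p_companies", "p_consumers", "aux1", "aux2"]

def linearize(elements):
    """Join the six gold elements into the sequence the model must generate."""
    return SEP.join(str(elements[f]) for f in FIELDS)

def parse(sequence):
    """Recover the six elements from a generated sequence."""
    return dict(zip(FIELDS, sequence.split(SEP)))
```

For the running example, `linearize` yields `Acuerdos comerciales <e> positive <e> positive <e> positive <e> True <e> True`, the sequence shown in the framework overview.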
      <p>
        In this paper, we utilize the pre-trained generative mT5 model [12] with an encoder-decoder
architecture for the following reasons. First, mT5 is an encoder-decoder architecture
that performs well on most text generation tasks, such as machine translation [13]
and aspect-based sentiment analysis [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Second, the mT5 model is trained on a vast collection
of natural text from 101 languages, sourced from the publicly accessible Common Crawl web
scrape. Notably, Spanish (es) is one of the three languages with a substantial amount of training
data in the variants of the mT5 model [12]. Third, representing the output of the two tasks in this
shared task as a natural language sentence is a straightforward process when fine-tuning
the mT5 models. Given a news headline x, the encoder component maps it to a sequence
of contextual embeddings e. The decoder then attends to the output of the encoder to
calculate the conditional probability distribution P<sub>θ</sub>(y | e) of the target sentence y, where θ denotes the
parameter weights. Finally, a softmax layer is applied to the output of the decoder to obtain the
probability distribution for the next token y<sub>t</sub> as:
P<sub>θ</sub>(y<sub>t</sub> | e, y<sub>0:t−1</sub>) = softmax(W h<sub>t</sub>)
(2)
where h<sub>t</sub> is the decoder hidden state at step t and W is a matrix that maps h<sub>t</sub> to a vector over the output vocabulary. In our work, we initialize θ with the
pre-trained parameter weights of the mT5 models [12] and fine-tune the parameters to maximize
the log-likelihood log P<sub>θ</sub>(y | e).
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Setup</title>
      <sec id="sec-4-1">
        <title>4.1. Data and Evaluation Metrics</title>
        <p>We only use the official training set [14], which is provided by the organizers, to train our
models for the two challenges in the shared task. Table 1 presents the general statistics of the
training and testing sets, while Table 2 shows the statistics of the polarity
classes for the three sentiment tasks. As depicted in Table 2, there is a clear
imbalance among the sentiment classes, along with distinct distribution differences between
the two sets. These imbalances and distribution differences challenge participating teams when
addressing sentiment-related problems within the datasets.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. System Settings</title>
        <p>We implemented our framework based on the HuggingFace Transformers library [15]. All models
were trained for 20 epochs. We used a learning rate of 1e-3 for mT5-Small and mT5-Base and
1e-4 for mT5-Large and mT5-XL. The batch size was set to {32, 16} based on the size of the pre-trained
language model. The beam width was set to 5. We did not use any development data to
tune the models. The AdamW optimizer was selected to optimize our models. The maximum
input and output sequence lengths were set to 70 and 48 tokens, respectively. For
our experiments, we set a fixed random seed of 42 to train the models. As the Spanish language
is not familiar to us, we did not apply any pre-processing to the dataset except
eliminating multiple spaces in the text.</p>
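The hyperparameters above can be collected in a small helper. This is a sketch of our settings, not released code; in particular, assigning batch size 32 to the smaller variants and 16 to the larger ones is our reading of “based on the size of pre-trained language models”:

```python
# Hyperparameters reported in Section 4.2 for fine-tuning the mT5 variants.
BASE_CONFIG = {
    "epochs": 20,
    "beam_width": 5,
    "max_input_len": 70,   # tokens
    "max_output_len": 48,  # tokens
    "seed": 42,
    "optimizer": "AdamW",
}

def variant_config(model_name):
    """Return the training configuration for a given mT5 variant."""
    small = model_name in ("mT5-Small", "mT5-Base")
    cfg = dict(BASE_CONFIG)
    cfg["learning_rate"] = 1e-3 if small else 1e-4  # 1e-4 for mT5-Large/XL
    # Assumption: the smaller variants fit the larger batch size in memory.
    cfg["batch_size"] = 32 if small else 16
    return cfg
```

Keeping the shared settings in one dictionary makes it easy to verify that only the learning rate and batch size differ across variants.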
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Main results</title>
      <p>The official results and the results of the top systems are shown in Table 3. Our best model
achieved a Top 3 ranking in both challenges during the final round. Specifically, for Task 1
- identifying the target term and its corresponding sentiment - our model achieved an
F1-score of 78.2175%, which is 1.0069% and 0.9997% lower than the F1-scores of the
Top 1 and Top 2 teams, respectively. Regarding Task 2 - classifying the sentiment for
companies and consumers - our model is also competitive, achieving an
F1-score of 61.0373%, which is 3.1976% lower than the Top 1 team’s model. For the three
sentiment analysis tasks, we achieved F1-scores of 78.9838% (Top 3), 58.8635% (Top 2) and
63.2111% (Top 3) for the target sentiment, companies sentiment and consumers sentiment tasks,
respectively.</p>
      <p>Table 4 shows the overall results of our submitted model and other variants on the test set
for the two challenges and the three sentiment analysis tasks. Overall, it can be seen that the
performance of the model improves when using larger language models. As mentioned in previous
work [12], larger mT5 variants are trained with a larger vocabulary. Consequently,
these larger models are able to capture and represent a wider range of linguistic patterns
and contextual information for the given text input. However, this also poses a challenge
regarding computational resources and model storage for large language models. As seen in
Table 4, the models trained on both the original tasks and the two auxiliary tasks exhibit better
performance than the models fine-tuned solely on the original tasks. This demonstrates
that our two auxiliary tasks improve the overall performance.</p>
      <p>Figure 2 presents the confusion matrices for the three sentiment analysis tasks in our final
submission. In the sentiment analysis task for target terms, our model performs well in classifying
“positive” and “negative” sentiments but struggles with the “neutral” class. The rate of incorrectly
predicting neutral labels as positive or negative is relatively high, at
approximately 30% and 27%, respectively. This result contrasts with the findings for Task 2,
which focuses on sentiment analysis for companies and consumers. As shown in Figure 2, our
approach achieves its highest accuracy on the neutral labels, while its performance on
positive and negative labels is comparatively lower. A significant percentage of misclassified
positive and negative labels, particularly in the “positive” class, are incorrectly predicted as
neutral labels in both tasks. One reason for this result could be the imbalance between the
polarity classes within the training set, as shown in Table 2.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Work</title>
      <p>In this paper, we describe our submission system for the IBERLEF 2023 Task - FinancES: Financial
Targeted Sentiment Analysis, which achieved a Top 3 ranking in both sub-tasks of the shared task.
Instead of solving the two challenges with classification approaches, we present a generative
approach based on contextual pre-trained language models that solves both tasks at the same time.
In addition, we design two simple auxiliary tasks that provide the model with additional information
linking the two tasks. Based on our experimental results and analysis, we believe this approach can
be widely applied to targeted sentiment analysis in other domains. Due to limitations in
computational resources, we could not explore large language models such as mT5-XXL [12] or
BLOOM [16]. However, we believe that fine-tuning these models can improve prediction
effectiveness. Additionally, applying effective text pre-processing techniques could further enhance
performance on the test set.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>Dang Van Thin was funded by the Master, PhD Scholarship Programme of Vingroup Innovation Foundation (VINIF), code VINIF.2022.TS120.</p>
      <p>[7] S. Yıldırım, D. Jothimani, C. Kavaklioğlu, A. Başar, Deep learning approaches for sentiment analysis on financial microblog dataset, in: 2019 IEEE International Conference on Big Data (Big Data), IEEE, 2019, pp. 5581–5584.
[8] C.-C. Lee, Z. Gao, C.-L. Tsai, BERT-based stock market sentiment analysis, in: 2020 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-Taiwan), 2020, pp. 1–2. doi:10.1109/ICCE-Taiwan49838.2020.9258102.
[9] K. Mishev, A. Gjorgjevikj, I. Vodenska, L. T. Chitkushev, D. Trajanov, Evaluation of sentiment analysis in finance: From lexicons to transformers, IEEE Access 8 (2020) 131662–131682. doi:10.1109/ACCESS.2020.3009626.
[10] K. L. Kohsasih, B. H. Hayadi, Robet, C. Juliandy, O. Pribadi, Andi, Sentiment analysis for financial news using RNN-LSTM network, in: 2022 4th International Conference on Cybernetics and Intelligent System (ICORIS), 2022, pp. 1–6. doi:10.1109/ICORIS56080.2022.10031595.
[11] K. Ong, W. van der Heever, R. Satapathy, G. Mengaldo, E. Cambria, FinXABSA: Explainable finance through aspect-based sentiment analysis, arXiv preprint arXiv:2303.02563 (2023).
[12] L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, C. Raffel, mT5: A massively multilingual pre-trained text-to-text transformer, in: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, Online, 2021, pp. 483–498. URL: https://aclanthology.org/2021.naacl-main.41. doi:10.18653/v1/2021.naacl-main.41.
[13] O. Agarwal, M. Kale, H. Ge, S. Shakeri, R. Al-Rfou, Machine translation aided bilingual data-to-text generation and semantic parsing, in: Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+), Association for Computational Linguistics, Dublin, Ireland (Virtual), 2020, pp. 125–130. URL: https://aclanthology.org/2020.webnlg-1.13.
[14] P. Ronghao, J. A. García-Díaz, F. García-Sánchez, R. Valencia-García, Evaluation of transformer models for financial targeted sentiment analysis in Spanish, PeerJ Computer Science 9 (2023) e1377. URL: https://doi.org/10.7717/peerj-cs.1377. doi:10.7717/peerj-cs.1377.
[15] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao, S. Gugger, M. Drame, Q. Lhoest, A. Rush, Transformers: State-of-the-art natural language processing, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Association for Computational Linguistics, Online, 2020, pp. 38–45. URL: https://aclanthology.org/2020.emnlp-demos.6. doi:10.18653/v1/2020.emnlp-demos.6.
[16] T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ilić, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon, M. Gallé, et al., BLOOM: A 176B-parameter open-access multilingual language model, arXiv preprint arXiv:2211.05100 (2022).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Jiménez-Zafra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rangel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montes-y Gómez</surname>
          </string-name>
          ,
          <article-title>Overview of IberLEF 2023: Natural Language Processing Challenges for Spanish and other Iberian Languages, in: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2023), co-located with the 39th Conference of the Spanish Society for Natural Language Processing (SEPLN 2023), CEUR-WS.org</article-title>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J. A.</given-names>
            <surname>García-Díaz</surname>
          </string-name>
          ,
          <string-name>
            <surname>Almela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>García-Sánchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. Alcaráz</given-names>
            <surname>Mármol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Marín-Pérez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Valencia-García</surname>
          </string-name>
          , Overview of FinancES 2023:
          <article-title>Financial Targeted Sentiment Analysis in Spanish</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>71</volume>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Raffel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Narang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Matena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Exploring the limits of transfer learning with a unified text-to-text transformer</article-title>
          ,
          <source>The Journal of Machine Learning Research</source>
          <volume>21</volume>
          (
          <year>2020</year>
          )
          <fpage>5485</fpage>
          -
          <lpage>5551</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ghazvininejad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Levy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Stoyanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          ,
          <article-title>BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension</article-title>
          , in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online,
          <year>2020</year>
          , pp.
          <fpage>7871</fpage>
          -
          <lpage>7880</lpage>
          . URL: https://aclanthology.org/2020.acl-main.703. doi:10.18653/v1/2020.acl-main.703.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <article-title>A unified generative framework for various NER subtasks</article-title>
          , in:
          <source>Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)</source>
          ,
          <source>Association for Computational Linguistics</source>
          , Online,
          <year>2021</year>
          , pp.
          <fpage>5808</fpage>
          -
          <lpage>5822</lpage>
          . URL: https://aclanthology.org/2021.acl-long.451. doi:10.18653/v1/2021.acl-long.451.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>A unified generative framework for aspect-based sentiment analysis</article-title>
          , in:
          <source>Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)</source>
          , Association for Computational Linguistics, Online, 2021, pp. 2416–2429. URL: https://aclanthology.org/2021.acl-long.188. doi:10.18653/v1/2021.acl-long.188.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>