<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Comprehensive Framework for Aspect-Category Sentiment Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Loris Di Quilio</string-name>
          <email>loris.diquilio@studenti.unich.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fabio Fioravanti</string-name>
          <email>fabio.fioravanti@unich.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>DEc, University of Chieti-Pescara</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <kwd-group>
        <kwd>Aspect-Category Sentiment Analysis (ACSA)</kwd>
        <kwd>Aspect-Based Sentiment Analysis (ABSA)</kwd>
        <kwd>Annotations</kwd>
        <kwd>Sentiment</kwd>
      </kwd-group>
      <abstract>
        <p>In this study, we developed an Aspect-Category Sentiment Analysis (ACSA) framework encompassing data conversion, semi-automatic annotation methods using predictions, and the creation of a prediction-based report. We aimed to adapt an Aspect-Category-Opinion Sentiment (ACOS) tool from the literature to the Aspect-Category Sentiment Analysis (ACSA) task. We developed a web application where the dataset released in this paper (the Beauty dataset) can be annotated manually or semi-automatically and incorporated into the training data to enhance the model. Additionally, we evaluated our framework on various datasets available in the literature, comparing it with a tool that follows a similar approach.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. The PyACSA tool</title>
      <p>
        Both ACOS and ACSA are Aspect-Based Sentiment Analysis (ABSA) tasks. They differ in that ACOS extracts four elements from the text (aspect terms, category, opinion terms, and sentiment polarity) [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ], whereas ACSA extracts two elements (category and sentiment polarity). In the table below we report an example that shows the extraction of all elements from a sentence.
      </p>
      <sec id="sec-2-1">
        <title>Text</title>
      </sec>
      <sec id="sec-2-2">
        <title>Aspect Term</title>
      </sec>
      <sec id="sec-2-3">
        <title>Category</title>
      </sec>
      <sec id="sec-2-4">
        <title>Opinion Term Sentiment polarity</title>
        <p>‘Though the service might be
a little slow, the waitresses
are very friendly.’
‘service’
‘waitresses’
‘service’
‘staf’
‘a little slow’
‘very friendly’
‘negative’
‘positive’</p>
        <p>
          As mentioned in the introduction, we developed the PyACSA tool by specializing a new feature of
PyABSA [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], which was designed for the more complex task of Aspect Category Opinion Sentiment, to
the Aspect-Category Sentiment Analysis. It is important to highlight the fact that the task is carried
with the same format as SemEval 2016 task 5, subtask 2 [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], which means at text-level. At text-level,
given a customer review about a target entity, the goal is to identify a set of {category, polarity} pairs
that summarize the opinions expressed in the review. The polarities can be “positive”, “negative”,
“neutral” (when a category is mentioned without any sentiment), or “conflict” (when the same category
is expressed in a positive and a negative inside the same text, but neither of the two is dominant).
        </p>
        <p>
          The PyACSA tool uses the T5 (Text-to-Text Transfer Transformer) [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] model which is based on
a standard encoder-decoder Transformer [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] capturing long-range dependencies in text. The T5
model, if trained on a vast corpus of text data, converts various natural language processing (NLP)
tasks into a text-to-text format, achieving state-of-the-art performance on many benchmarks covering
summarization, question answering, text classification, and more. The pre-trained model used in this
work is flan-t5-xl [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], an extension of the original T5 model with improved performance.
        </p>
        <p>
          The PyABSA tool, which is implemented in PyTorch [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], is built using this pre-trained model and
ifne-tuned on the dataset using several instructions, one for each element. We developed PyACSA
by modifying the PyABSA code so that only one extraction is performed containing both categories
and sentiment polarities. This method takes an input text, categories, and polarities, and returns a
string that combines the instructions with the provided input. This formatted string is then fed to the
model for training or prediction. A specific module facilitates the creation of a dataset for this task by
preparing the data in the required format, creating the training and test datasets, and reading JSON data
from a file. With this approach, we aim to simplify the implementation of the ACSA framework that
exploits several utilities to facilitate this task. The tool’s integration with PyABSA provides a robust and
lfexible system for our specific needs, allowing us to streamline the data preparation and model training
processes exploring the application of a text-to-text model in a task where it is not typically used.
        </p>
      </sec>
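      <p>As a rough sketch of this formatting step, the following function builds one instruction-plus-input string and its target; the instruction wording and the joint “category:polarity” output format are illustrative assumptions, not the exact templates used by PyACSA.</p>

```python
def build_acsa_example(text, pairs):
    """Combine an ACSA instruction with the input text.

    The template below is hypothetical; `pairs` is a list of
    (category, polarity) tuples that form the single target string
    the model learns to generate in one extraction.
    """
    instruction = (
        "Extract the aspect categories and their sentiment polarities "
        "from the following review."
    )
    source = instruction + "\nReview: " + text
    target = ", ".join(category + ":" + polarity for category, polarity in pairs)
    return source, target

# One (source, target) training pair for the running example.
src, tgt = build_acsa_example(
    "Though the service might be a little slow, the waitresses are very friendly.",
    [("service", "negative"), ("staff", "positive")],
)
```

      <p>Pairs of such (source, target) strings can then be used to fine-tune any text-to-text model; at prediction time, the generated target is parsed back into {category, polarity} pairs.</p>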
    </sec>
    <sec id="sec-3">
      <title>3. ACSA Utilities</title>
      <p>In this section, we present the framework we developed for using and evaluating the PyACSA tool.
We developed a web application in Python using the Flask package to promote the use of this task,
facilitate user interaction with the model, and make the entire process more accessible and efficient.
The framework contains key components such as data format transformation modules (for converting
various input formats into the JSON format required by PyACSA) and utilities for manual and
semi-automatic data annotation, allowing users to annotate data directly within the web app or use the
model’s predictions to assist in the annotation process. This dual approach enhances flexibility and
efficiency, catering to different user needs and preferences, while leveraging the model for annotations
improves accuracy and accelerates the workflow.</p>
      <p>Furthermore, the framework contains a module that generates a bar chart along with a
sentiment index (see Figure 3), which evaluates the categories of the reviews entered into the
system. This feature provides valuable insights into the sentiment distribution across different
aspects of our domain, aiding in the analysis and interpretation of the data. We release our code at
https://github.com/lorisdiquilio/ACSA-Framework-using-T2T-model.</p>
      <p>In the following paragraphs, we describe the components of the framework in detail, highlighting
their functionalities and expected benefits.</p>
      <sec id="sec-3-1">
        <title>3.1. Data Converter</title>
        <p>The data converter module provides several converters designed to facilitate the transformation of data
across formats, ensuring compatibility and ease of use for subsequent tasks. The module is
composed of several functions written in Python that transform data across tasks, sub-tasks, and formats.
Among the main converters we developed are those for converting data from the SemEval 14, 15
and 16 (XML) format to the JSON format of PyACSA, which have been used to evaluate PyACSA on the
SemEval datasets.</p>
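        <p>A minimal sketch of such a converter is shown below, using the element and attribute names of the SemEval-2016 XML files (sentence, text, and Opinion with category and polarity attributes); the output keys (text, labels) are assumptions, since the exact JSON schema expected by PyACSA is defined in the released repository.</p>

```python
import json
import xml.etree.ElementTree as ET

def semeval16_to_json(xml_path, json_path):
    """Convert a SemEval-2016 ABSA XML file to a flat JSON list.

    Each record pairs a sentence with its {category, polarity}
    annotations; the record layout here is an assumed schema.
    """
    records = []
    root = ET.parse(xml_path).getroot()
    for sentence in root.iter("sentence"):
        text = sentence.findtext("text", default="").strip()
        labels = [
            {"category": op.get("category"), "polarity": op.get("polarity")}
            for op in sentence.iter("Opinion")
        ]
        if text:
            records.append({"text": text, "labels": labels})
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)
    return records
```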
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Data Annotation</title>
        <sec id="sec-3-2-1">
          <title>3.2.1. Manual annotation</title>
          <p>This module allows the user to annotate the dataset in the JSON format used by our tool (shown in
Listing 1). Data can be annotated both manually and in a semi-automatic way.</p>
          <p>This module allows the user to load a training file, in JSON format, that contains annotated text. After loading
the file, it is possible to annotate each review by adding new annotations, selecting the categories and
polarities, or by deleting annotations, if needed. Additionally, the module allows the user to add categories that
are not already known to the system, offering flexibility and adaptability to various annotation needs.</p>
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Semi-automatic annotation</title>
          <p>This module is designed to leverage the initial trained model for further improvements. Once a trained
model is available, through this module we can use the model itself to suggest annotations on new data,
making the dataset creation process faster and more efficient.</p>
          <p>Similar to the manual annotation module, upon accessing this tab, the reviews and model predictions
will be displayed, with the initial prediction next to the text. Multiple predictions for the same sentence
will appear below the text. After reviewing the model’s predictions, necessary corrections can be made
and saved back into the original training file (JSON). This feature was used to annotate more data and
improve model performance during experiments.</p>
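          <p>A minimal sketch of the underlying merge-and-save step, under the assumption that annotations are kept as category-to-polarity mappings (the function names and record layout are hypothetical):</p>

```python
import json

def apply_corrections(predicted, corrections):
    """Merge reviewer corrections into model-suggested annotations.

    Both arguments map category -> polarity; a correction value of
    None deletes the suggested annotation.
    """
    merged = dict(predicted)
    for category, polarity in corrections.items():
        if polarity is None:
            merged.pop(category, None)
        else:
            merged[category] = polarity
    return merged

def save_annotated(path, text, annotations):
    """Append an accepted example to the training file (a JSON list)."""
    try:
        with open(path, encoding="utf-8") as f:
            data = json.load(f)
    except FileNotFoundError:
        data = []
    data.append({"text": text, "labels": [
        {"category": c, "polarity": p} for c, p in sorted(annotations.items())
    ]})
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)
```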
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Report Generation</title>
        <p>This module can be used to generate a report providing a final assessment of the reviews for a product.</p>
        <p>We compute a sentiment index, which measures the overall quality of the product through all its
reviews within the relevant domain. In this case, the domain comprises Skin Care, Body Care and
Hair Care products. The sentiment index is computed based on the aggregated sentiment scores across
different categories, offering a comprehensive evaluation of the product’s performance. The sentiment
index is computed as follows:</p>
        <p>Sentiment Index = (Positive − Negative) / (Positive + Negative)  (1)</p>
        <p>where Positive and Negative denote the number of positive and negative reviews, respectively. Note
that the sentiment index value ranges from −1 to 1.</p>
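        <p>As a minimal sketch, Equation (1) can be computed directly from the annotation counts; the helper below is illustrative (not the released code) and reports 0.0 when there are no positive or negative reviews, where the index is otherwise undefined.</p>

```python
def sentiment_index(positive, negative):
    """Sentiment Index = (Positive - Negative) / (Positive + Negative)."""
    total = positive + negative
    # Undefined when there are no positive or negative reviews;
    # report 0.0 (neutral) in that case.
    return (positive - negative) / total if total else 0.0

def category_indices(counts):
    """Per-category indices from {category: (positive, negative)} counts."""
    return {cat: sentiment_index(p, n) for cat, (p, n) in counts.items()}
```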
        <p>In Figure 3, we show the bar chart generated by the module, which displays the review polarities
for each category of the selected product. This bar chart provides a visual representation of how users
perceive different aspects of the product, with each bar indicating the level of positive or negative
sentiment associated with a specific category. Figure 4 presents the overall sentiment index and the
sentiment index calculated for each category. The overall sentiment index gives a comprehensive view
of the general perception of the product, while the category-specific sentiment indices allow us to delve
deeper into particular aspects. This dual representation helps in understanding not only the general
acceptance of the product but also the specific areas where it excels or falls short.</p>
        <p>From the results, we can evaluate the aspects that perform positively and negatively for this specific
product. For instance, we can conclude that this product is generally well-received because it is effective,
has a pleasant texture and smell, and is conveniently sized for travel. However, there are some drawbacks
noted by users, such as the small quantity (for some), poor delivery and packaging, and the high cost of
the product.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental evaluation</title>
      <p>We have built an ACSA dataset based on Beauty and Personal Care reviews. The dataset was annotated
manually and semi-automatically by one of the authors and is available in the repository.</p>
      <p>In addition to the new dataset, we used some publicly available datasets
(https://github.com/l294265421/ACSA/tree/master/datasets): SemEval Laptop and
Restaurant, and MAMS (Multi-Aspect Multi-Sentiment). We used the data conversion module of our
framework to convert datasets in XML format to the JSON format used by PyACSA. In Table 2 we show
some statistics about the datasets.</p>
      <sec id="sec-4-1">
        <title>Beauty</title>
        <p>MAMS
Rest 16</p>
        <p>Laptop
# Train
# Test
# Categories
# Positive annotations
# Negative annotations
# Neutral annotations
# Conflict annotations</p>
        <sec id="sec-4-1-1">
          <title>4.1. Experimental settings and results</title>
          <p>In the experiments, the following settings are used: the pre-trained flan-t5-xl model with 3 billion parameters
and a learning rate of 5 × 10−5. Epochs and batch size are set to 10 and 6, respectively. The L2 regularization
parameter (weight decay, https://paperswithcode.com/method/weight-decay), which helps prevent overfitting by
shrinking model weights during training, is set to 0.01. The warmup ratio (the fraction of total training steps
used for a linear warmup from 0 to learning_rate) is set to 0.1.</p>
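          <p>The warmup ratio can be illustrated with a small helper computing the learning rate at a given step; this is a generic linear-warmup sketch using the settings above, not the actual scheduler used in the experiments (which would typically also decay the rate after warmup).</p>

```python
def lr_at_step(step, total_steps, base_lr=5e-5, warmup_ratio=0.1):
    """Linear warmup from 0 to base_lr over warmup_ratio * total_steps.

    After warmup the rate is held constant here for simplicity; real
    schedulers usually decay it (e.g. linearly or with inverse sqrt).
    """
    warmup_steps = max(1, int(total_steps * warmup_ratio))
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr
```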
          <p>
            The PyACSA tool is compared with ACSA-Gen [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ], one of the state-of-the-art tools in this field, as stated in the survey [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], which also leverages the power of a pre-trained generative model. Note that
the model used by PyACSA is more powerful than that used by ACSA-Gen, BART-large-MNLI, the best
BART model for the classification task. The latter was used with the template “The
sentiment polarity of &lt;given_category&gt; is &lt;polarity_type&gt;” for each label. The configuration for
BART-large-MNLI is: learning rate of 4 × 10−5, 15 epochs, and batch size of 16.
          </p>
          <p>The results reported in the table are based on the most common metrics: Precision, Recall, and
Micro-F1 Score.</p>
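          <p>For the text-level ACSA task, these metrics are micro-averaged over the predicted {category, polarity} pairs by pooling true positives, false positives, and false negatives across all reviews; the sketch below illustrates the computation and is not the evaluation code of either tool.</p>

```python
def micro_prf(gold, pred):
    """Micro Precision/Recall/F1 over sets of (category, polarity) pairs.

    `gold` and `pred` are parallel lists (one entry per review) of sets
    of pairs; counts are pooled over all reviews before computing the
    metrics.
    """
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)   # pairs predicted and present in the gold set
        fp += len(p - g)   # predicted but not in the gold set
        fn += len(g - p)   # in the gold set but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```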
          <p>[Table: Precision (P), Recall (R), and micro-F1 for ACSA-Gen and PyACSA on each dataset; the numeric values were not recovered from the source.]</p>
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>Results</title>
        <p>The results show that PyACSA performs better than ACSA-Gen on all the considered datasets, likely
due to the power of the pre-trained model, as the BART model has a smaller size (approximately 406
million parameters) than the T5-XL (3 billion). We notice that PyACSA performs quite well in
the ACSA task at the text level, except on the Laptop dataset, probably because of the high number of
categories it contains.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and future works</title>
      <p>We adapted an existing tool from the literature to transition from an
Aspect-Category-Opinion-Sentiment (ACOS) task to an Aspect-Category Sentiment Analysis (ACSA) task. We developed a
comprehensive web application featuring various sections, including the transformation of different
data formats for various Aspect-Based Sentiment Analysis tasks, the use of the model for data annotation,
and the creation of a sentiment index that evaluates how the product performs on the topics discussed in the
reviews entered into the tool. Additionally, we released a new dataset, which will be made available
for future research in this domain, and we evaluated the tool by comparing it with a
state-of-the-art tool. Our primary goal is to promote the use of this task by simplifying the entire
underlying process, thereby facilitating broader adoption and application in the research community.
However, there are some limitations to our current approach. One significant challenge is that loading
the model is resource-intensive and requires dedicated computing resources. Additionally, the interface is
currently tailored to the specific domain addressed in the paper (the Beauty dataset), and future work
should aim to expand its applicability across several domains to ensure broader usability. While the web
application is continuously improving, future efforts will focus on implementing new features, including
sections for model uploads. For future work, we want to evaluate the web app interface developed in this
study with human participants to ensure that it is intuitive and user-friendly, highlighting
possible areas of improvement.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Bonetta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Hromei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Siciliani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Stranisci</surname>
          </string-name>
          ,
          <article-title>Preface to the Eighth Workshop on Natural Language for Artificial Intelligence (NL4AI 2024)</article-title>
          ,
          <source>in: Proceedings of the Eighth Workshop on Natural Language for Artificial Intelligence (NL4AI 2024), co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence (AI*IA 2024)</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ping</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Aspect category sentiment analysis based on prompt-based learning with attention mechanism</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>565</volume>
          (
          <year>2024</year>
          )
          <fpage>126994</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>W.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <article-title>An improved aspect-category sentiment analysis model for text sentiment analysis based on roberta</article-title>
          ,
          <source>Appl. Intell</source>
          .
          <volume>51</volume>
          (
          <year>2021</year>
          )
          <fpage>3522</fpage>
          -
          <lpage>3533</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>PyABSA</article-title>
          ,
          <year>2023</year>
          . URL: https://github.com/yangheng95/PyABSA.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Teng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Solving aspect category sentiment analysis as a text generation task</article-title>
          ,
          <source>in: EMNLP (1)</source>
          ,
          <source>Association for Computational Linguistics</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>4406</fpage>
          -
          <lpage>4416</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Lam</surname>
          </string-name>
          ,
          <article-title>A survey on aspect-based sentiment analysis: Tasks, methods, and challenges</article-title>
          ,
          <source>CoRR abs/2203.01054</source>
          (
          <year>2022</year>
          ). doi:10.48550/arXiv.2203.01054.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Di Quilio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Fioravanti</surname>
          </string-name>
          ,
          <article-title>Evaluating the aspect-category-opinion-sentiment analysis task on a custom dataset (short paper)</article-title>
          ,
          <source>in: NL4AI@AI*IA</source>
          , volume
          <volume>3551</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] SemEval,
          <source>SemEval-2016 Task 5</source>
          ,
          <year>2016</year>
          . URL: https://alt.qcri.org/semeval2016/task5/.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Raffel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Narang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Matena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Exploring the limits of transfer learning with a unified text-to-text transformer</article-title>
          ,
          <source>J. Mach. Learn. Res.</source>
          <volume>21</volume>
          (
          <year>2020</year>
          )
          <fpage>140:1</fpage>
          -
          <lpage>140:67</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Polosukhin</surname>
          </string-name>
          ,
          <article-title>Attention is all you need</article-title>
          ,
          <source>in: NIPS</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>5998</fpage>
          -
          <lpage>6008</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H. W.</given-names>
            <surname>Chung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Longpre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zoph</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Fedus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dehghani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Brahma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Webson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Suzgun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chowdhery</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Narang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Petrov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. H.</given-names>
            <surname>Chi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dean</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <article-title>Scaling instruction-finetuned language models</article-title>
          ,
          <source>CoRR abs/2210.11416</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Paszke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Massa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lerer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bradbury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Chanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Killeen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Gimelshein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Antiga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Desmaison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Köpf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>DeVito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Raison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tejani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chilamkurthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Steiner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chintala</surname>
          </string-name>
          ,
          <article-title>PyTorch: An imperative style, high-performance deep learning library</article-title>
          ,
          <source>in: NeurIPS</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>8024</fpage>
          -
          <lpage>8035</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>