<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Zootopi at HOPE2023@IberLEF: Is Zero-Shot ChatGPT the Future of Hope Speech Detection?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Anh Ngo</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hanh Thi Hong Tran</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Jožef Stefan Institute</institution>
          ,
          <country country="SI">Slovenia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Jožef Stefan International Postgraduate School</institution>
          ,
          <country country="SI">Slovenia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of La Rochelle</institution>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Wroclaw University of Science and Technology</institution>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>Hope Speech Detection is a Natural Language Processing (NLP) task that aims to detect any message or text that can relax a hostile environment and inspire people with optimism in times of suffering from illness, stress, loneliness, or depression. In this paper, we propose two different mechanisms to identify whether a text contains Hope Speech or not: (1) we treat the problem as a sequence classification task with XLMR and examine whether the language model's performance improves when additional training data from another language is added, as well as its capacity to transfer knowledge learned in one language to another unseen one; (2) we evaluate ChatGPT's performance and examine how strongly the prompting model's predictions are affected by certain sensitive words, which could potentially lead to data poisoning. The experiments are conducted on two datasets with informal language from different social networks, proposed by the IberLEF 2023 Task [1], including Spanish tweets and English YouTube comments, and the performance is evaluated using Precision, Recall, and F1-score. Overall, ChatGPT outperforms the other approaches when applied to the less-resourced Spanish tweets, while monolingual XLMR surpasses all other setups on the rich-resourced English comments.</p>
      </abstract>
      <kwd-group>
        <kwd>Hope Speech</kwd>
        <kwd>Transformers</kwd>
        <kwd>Multilingual</kwd>
        <kwd>Cross-lingual</kwd>
        <kwd>ChatGPT</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Discriminatory messages, such as Hate Speech, are prevalent on social media platforms and
often target individuals because of their race, ethnicity, gender, sexual orientation, nationality,
or religion. Research [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] has been conducted on the significance of social media in the lives of
vulnerable groups (e.g., the LGBT community, people with disabilities, and racial minorities) and
the findings indicate that a vulnerable individual’s social media engagement is crucial in shaping
their personality and their perspectives on society [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. To counteract the negative impact of
Hate Speech, our study focuses on Hope Speech, which comprises encouraging, supportive, and
inclusive messages that can relax a hostile environment and effectively motivate people
in times of suffering from illness, stress, loneliness, or depression [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Unlike the well-established domain of Hate Speech Detection, the detection of Hope Speech is
a relatively nascent area of research. Inspired by the second workshop on Language Technology
for Equality, Diversity, and Inclusion (LT-EDI-2022), which covered five languages (Tamil, Malayalam,
Kannada, English, and Spanish), the IberLEF 2023 Task - HOPE: Multilingual Hope Speech Detection
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], in pursuit of equality, diversity, and inclusion, focuses on Hope Speech Detection in two
languages, Spanish and English, with expanded and improved corpora. Given the
Spanish tweets [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and English Youtube comments [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] of the task, we aim to identify whether
each text contains Hope Speech (HS) or not (NHS).
      </p>
      <p>The novelties of our contribution are threefold: (1) examining whether the language model’s
performance improves when additional training data from another language is added; (2)
examining the language model’s capacity to apply knowledge of Hope Speech learned in one
language to another unseen language in both directions (from the rich-resourced corpus to the
less-resourced one and vice versa); and (3) investigating the extent to which sensitive words can
cause data poisoning by influencing the language and prompting model’s predictions. Our work
brings valuable insights for Hope Speech Detection in the less-resourced languages niche.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>The emergence and widespread use of social media platforms have significantly transformed
how people communicate and, consequently, led to a significant research effort dedicated
to detecting harmful content, such as offensive language, toxic comments, and Hate Speech.
However, on the reverse side, Hope Speech Detection is a relatively novel field of
study with currently limited available literature.</p>
      <p>
        In an initial work by Palakodety et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], the identification of hopeful language in YouTube
comments was explored, specifically in the context of the Pakistan and India war. Until now, only
a small number of datasets have been developed for the Hope Speech Detection task, such as
the HopeEDI dataset [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which contains YouTube comments in English, Tamil, and Malayalam;
the KanHope dataset [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which includes YouTube comments in code-mixed Kannada-English
language; and the SpanishHopeEDI dataset [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], which covers LGBT-related tweets in Spanish.
      </p>
      <p>
        Hossain et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], Aggarwal et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], S et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] employed various machine learning and
deep learning models (e.g., Support Vector Machine, Naive Bayes, Ensemble, Convolutional
Neural Network, Bi-LSTM, and Recurrent Neural Network) in combination with word
embeddings (e.g., TF-IDF, Word2vec, and fastText) to handle the HopeEDI dataset. Additionally,
Hossain et al. [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], Mahajan et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], Aggarwal et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] used Transformer-based techniques
for fine-tuning multiple language models, including mBERT, XLMR, Indic-BERT, and XLNet.
Furthermore, Puranik et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] employed the fine-tuned ULMFit model, while Ghanghor et al.
[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] used customized mBERT and customized XLMR. Moreover, Hande et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] fine-tuned the
Dual-channel BERT4Hope model for the code-mixed language in the KanHope dataset.
      </p>
      <p>
        Regarding incorporating ChatGPT into NLP tasks, recent research has primarily focused on
evaluating ChatGPT’s performance in text generation [
        <xref ref-type="bibr" rid="ref15 ref16 ref17 ref18">15, 16, 17, 18</xref>
        ] and question-answering
[
        <xref ref-type="bibr" rid="ref19 ref20 ref21">19, 20, 21</xref>
        ] tasks. In addition, Kocoń et al. [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]’s study also evaluated ChatGPT’s performance
in various NLP tasks, including offensiveness detection, while in Huang et al. [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], the authors
assessed its ability to generate explanations for implicitly hateful speech.
      </p>
      <p>
        In the realm of considering the task as a sequence-classification problem, our work adopts
a similar evaluation of multilingual and cross-lingual learning that has been applied to other
languages in other downstream tasks (e.g., sentiment analysis [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], keyword or term extraction
[
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], to mention a few). Meanwhile, we evaluated the performance of ChatGPT on our specific
tasks under the inspiration of Huang et al. [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] for hateful speech. Neither of these two directions
has been previously explored in Hope Speech Detection.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Dataset</title>
      <p>
        In the context of IberLEF 2023 Task - HOPE. Multilingual Hope Speech Detection [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], we focus
on two proposed corpora with informal texts from two different social platforms, including
training, development, and testing sets of Spanish tweets [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] from Twitter and English comments
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] from YouTube. For both corpora, the data distribution is presented in Table 1, while the
sequence length distribution for each class in the training and development sets is presented in
Figure 1.
      </p>
      <p>Table 1 demonstrates two different characteristics of the two datasets: (1) the English
comments can be considered a rich-resourced corpus, with 25,450 examples collected for training
and development and 4,805 examples for testing. In contrast, the Spanish tweets form a
less-resourced corpus whose number of training and development examples is approximately 16
times smaller than that of the English comments; (2) the label distribution in the training
and development phase is highly imbalanced in the English set, where the number of HS
examples is 10 times larger than the number of NHS samples. Meanwhile, the Spanish set contains a
relatively balanced amount of HS and NHS examples.</p>
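The imbalance described above can be quantified with a small helper. The following is an illustrative sketch on toy labels mimicking a 10:1 skew; `label_stats` is our own hypothetical helper, not code from the task organizers, and the numbers are not the real corpus statistics.

```python
from collections import Counter

def label_stats(labels):
    """Count examples per class and report the majority/minority imbalance ratio."""
    counts = Counter(labels)
    majority = max(counts, key=counts.get)
    minority = min(counts, key=counts.get)
    return counts, counts[majority] / counts[minority]

# Toy labels with a 10:1 skew (illustrative only, not the real dataset):
labels = ["HS"] * 10 + ["NHS"] * 1
counts, ratio = label_stats(labels)
print(counts["HS"], counts["NHS"], ratio)  # 10 1 10.0
```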
      <p>Besides, Figure 1 provides another insight into the distribution of sequence length, that is, the
number of words within a sequence. Although the English comments have a positively skewed
distribution, most of the English comments and Spanish tweets are short, which leads to a lack
of context for the language model training phase. Additionally, both use informal language,
which includes misspellings (e.g., Hi, teachet Hadar. Youre my favorite teachet. L liked your
explanation very much...), emojis, and onomatopoeias (e.g., Hmmm...). Figures 2 and 3 depict
data samples from the corpora used in this task.</p>
      <p>Text: I feel so base for that guy! They treated him as if he wasn’t a human just because of who he
loved!
Category: HS
Text: So did I. She’s clearly grooming herself and behaving in a more masculine manner. Lol when
she said she didn’t agree I was like “...you don’t?”</p>
      <p>Category: NHS</p>
      <p>Text: Cada 28 de Junio me levanto escuchando Born This Way. Sigue siendo un himno. Porque
cada persona del colectivo #LGTB+ nació así. Porque nadie escogería a propósito una vida de
opresión.Sigamos luchando por nuestra dignidad.Feliz #Orgullo #Orgullo2021
Category: HS
Text: Ánimo, donde están los defensores del mal llamado movimiento #LGTBI ? Los mismos q
defienden la Inmigración sin barrerasDONDE?</p>
      <p>Category: NHS</p>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <sec id="sec-4-1">
        <title>4.1. Sequence Classification</title>
        <p>
          We consider our task as an experiment on sequence classification problems with XLMR [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ], a
Transformer-based model pretrained on 2.5TB of filtered CommonCrawl data containing 100
languages. Three experimental setups are proposed to verify our first two hypotheses, including
monolingual, multilingual, and cross-lingual learning.
        </p>
        <p>• Monolingual setup: We fine-tune two monolingual XLMR classifiers for English and
Spanish, respectively, and test each model in the same language. In this scenario, we
evaluate how well the model performs when the language of the training set matches that
of the test set.
• Cross-lingual setup: We fine-tune XLMR classifiers in one language and test them on another
language that does not appear in the training set. In this scenario, we examine the model’s
ability to apply knowledge of Hope Speech learned in one language to another unseen
language. Note that we consider both directions: transferring knowledge from a
rich-resourced language (e.g., English) to a less-resourced language (e.g., Spanish) and vice
versa.
• Multilingual setup: We fine-tune our classifier using the training sets from both
languages and then apply the model to the test sets of both languages, respectively. In this
scenario, we examine whether the language model’s performance improves when
additional training data from another language is added.</p>
        <p>We divide the dataset into training, validation, and testing sets, where the training-development
set is split into two subsets of training and validation set, separately with a ratio of 3:1. The
training split is used for fine-tuning the models while the validation split is used to prevent
over-fitting during the fine-tuning phase. Finally, the test split is used for evaluation and is
excluded during model training. More specifically, the model is fine-tuned on the training set to
predict the probability for each sequence whether it is Hope Speech (HS) or not (NHS).</p>
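A minimal sketch of the 3:1 split described above; the function name, seed, and toy data are our own illustration under stated assumptions, not the authors' exact procedure.

```python
import random

def split_train_validation(examples, train_fraction=0.75, seed=13):
    """Shuffle the combined training-development set and cut it 3:1 into a
    training split (for fine-tuning) and a validation split (to monitor
    over-fitting during fine-tuning)."""
    shuffled = list(examples)
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for reproducibility
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Toy (text, label) pairs standing in for the real corpus:
data = [(f"example {i}", "HS" if i % 2 else "NHS") for i in range(100)]
train, valid = split_train_validation(data)
print(len(train), len(valid))  # 75 25
```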
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Prompting</title>
        <p>We formulate two different prompting scenarios for the English and Spanish datasets. The
Spanish dataset focuses solely on the LGBT+ community, and therefore, the prompt will furnish
both sentence context and criteria for determining whether a text is classified as HS or NHS.
Figure 4 illustrates the prompting sample for the Spanish dataset while Figure 6 describes the
prompting scenario for the English dataset.</p>
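The two scenarios can be sketched as plain string templates. `build_prompt` is a hypothetical helper of our own, and the wording below is abridged from the paper's figures rather than the exact prompts used.

```python
ES_CONTEXT = ('Knowing that a text is considered as "Hope Speech" if it explicitly '
              'supports the social integration of minorities, is a positive inspiration '
              'for the LGTBI community, or unconditionally promotes tolerance. ')
EN_CONTEXT = ('Knowing that the domains are relevant to social topics, such as '
              'Equality, Diversity and Inclusion. ')

def build_prompt(text, language):
    """Assemble a zero-shot classification prompt; the Spanish scenario adds
    task-specific criteria, the English one only a broad topical context."""
    header = ('Describe the sentiment of the given text using one of these two '
              'attributes: "Hope Speech", "Non Hope Speech". ')
    context = ES_CONTEXT if language == "es" else EN_CONTEXT
    footer = ('Do not give explanation, give only an answer as either '
              '"Hope Speech" or "Non Hope Speech". Text: ')
    return header + context + footer + text

prompt = build_prompt("Sigamos luchando por nuestra dignidad.", "es")
```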
        <p>Describe the sentiment of the given text using one of these two attributes: “Hope Speech”,
“Non Hope Speech”. Knowing that a text is considered as “Hope Speech” if the text: (1)
explicitly supports the social integration of minorities; (2) is a positive inspiration for the
LGTBI community; (3) explicitly encourages LGTBI people who might find themselves in a
situation; or (4) unconditionally promotes tolerance. And knowing that a text is considered
as “Non Hope Speech” if it: (1) expresses negative sentiment towards the LGTBI community;
(2) explicitly seeks violence; or (3) uses gender-based insults. Do not give explanation, give
only an answer as either “Hope Speech” or “Non Hope Speech”. Text: Hilo- Hoy en día del
#OrgulloLGTBI hoy más que nunca los derechos civiles , deben ser sobre el individuo sin excluir a
nadie por sus gusto, orientaciones o religión . Eso ya es de libertad del individuo. Pero tampoco
caigamos en leyes especiales para personas o grupos</p>
        <p>ChatGPT answer: Hope Speech.</p>
        <p>Describe the sentiment of the given text using one of these two attributes: ‘Hope Speech’,
‘Non Hope Speech’. Knowing that the domains are relevant to social topics, such as Equality,
Diversity and Inclusion, including LGBTIQ issues, COVID-19, women in STEM, or Black Lives
Matter. Do not give explanation. Select one of these two labels: ‘Hope Speech’, ‘Non Hope
Speech’. Text: So pulling a statue out is going to re-write the past is it? Mindless fools. It would
be different if there still was slavery in Bristol. But the present has moved on from the past and
these moronic individuals need to also.</p>
        <p>ChatGPT answer: Non Hope Speech.</p>
        <p>Describe the sentiment of the given text using one of these two attributes: “Hope Speech”,
“Non Hope Speech”. Do not give explanation. Select one of these two labels: “Hope Speech”,
“Non Hope Speech”. Text: So pulling a statue out is going to re-write the past is it? Mindless
fools. It would be different if there still was slavery in Bristol. But the present has moved on from
the past and these moronic individuals need to also.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Evaluation metrics</title>
        <p>We measure the performance of our proposed classifiers using Precision, Recall, and F1-score
(F1) per category, averaged using the macro-average method. These are also the evaluation
metrics suggested by the IberLEF 2023 Task - HOPE: Multilingual Hope Speech Detection, which
makes our work more comparable with the related work.</p>
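The macro-averaged metrics named above can be computed without external libraries. A minimal sketch, with toy predictions that are illustrative only and not the paper's results:

```python
def macro_prf(y_true, y_pred, labels=("HS", "NHS")):
    """Per-class Precision/Recall/F1, then the unweighted (macro) average."""
    scores = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores.append((prec, rec, f1))
    n = len(labels)
    return tuple(sum(s[i] for s in scores) / n for i in range(3))

# Toy predictions (illustrative only):
macro_p, macro_r, macro_f1 = macro_prf(["HS", "NHS", "HS", "NHS"],
                                       ["HS", "HS", "HS", "NHS"])
print(round(macro_f1, 4))  # 0.7333
```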
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results and Analysis</title>
      <sec id="sec-5-1">
        <title>5.1. Results</title>
        <p>Table 2 presents the performance of our approaches in both sequence classification and
prompting regarding Precision, Recall, F1-score per class, and macro-average F1-score where the
macro-average F1-score will be used for the final ranking of the model’s performance.
Regarding the sequence-classification mechanism, the results demonstrate that rich-resourced
cross-lingual learning (i.e., using knowledge from English to classify Spanish texts) can
significantly enhance the model’s performance. Specifically, the cross-lingual setup in Spanish
outperforms the monolingual setup by up to 7 percentage points (pp) in F1-score and up to 5
pp compared to the multilingual setup. However, for rich-resourced languages (e.g., English),
the monolingual setup performs better without the need for additional knowledge from other
less-resourced languages.</p>
        <p>On the other hand, the best prompting with ChatGPT surpasses the other
sequence-classification approaches we applied by a large margin on all three evaluation
metrics for the clean and less-resourced language, Spanish, but failed to capture the
information from English contexts to provide a good classification.</p>
        <p>In addition, the stability of the sequence-classification approach was evident as it achieved the
sixth position on the official leaderboards for both the English and Spanish datasets. Conversely,
the prompting method utilizing ChatGPT displayed potential inconsistency, outperforming
other groups with first place for the Spanish dataset but finishing last for the English dataset.
This result highlights the advantage of the ChatGPT prompting approach, which benefitted
from the precise information provided by designated prompts. For instance, in the case of the
Spanish corpus, the prompts provided specific scenarios, contexts, and domains where hope
speech or non-hope speech occurred, leading to significantly improved accuracy. In contrast,
for the English corpus, the information regarding the context and situation in which hope
speech and non-hope speech happened was broader and less specific. As a result, we observed
a notable decrease in performance.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Error analysis</title>
        <sec id="sec-5-2-1">
          <title>5.2.1. Data ambiguity</title>
          <p>Regarding the English dataset, there exist multiple instances of ambiguity that directly affect
the performance of both proposed mechanisms, especially the prompting. For example, some
sentences do not express any attitude towards Hope Speech or Non-Hope Speech, such as
“Its for the birds”, “Peter Peter wake up” or “When I was little”, while others are too short,
composed of only one or two words, such as “Lol”, “Elo”, “Hahaa”, or “The sweetest”.</p>
          <p>Describe the sentiment of the given text using one of these two attributes: ‘Hope Speech’,
‘Non Hope Speech’. Knowing that the domains are relevant to social topics, such as Equality,
Diversity and Inclusion, including LGBTIQ issues, COVID-19, women in STEM, or Black Lives
Matter. Do not give explanation. Select one of these two labels: ’Hope Speech’, ’Non Hope
Speech’. Text: The sweetest
ChatGPT answer: I’m sorry, but the given text "The sweetest" does not contain enough information
to determine its sentiment or whether it belongs to the category of “Hope Speech” or “Non Hope
Speech”. Could you please provide me with a more substantial text to work with?</p>
          <p>Regarding ChatGPT's initial prompt scenario for the English dataset, where we provided
the chatbot with context about the subject matter of the comments, there were multiple instances
where it failed to answer due to the natural ambiguity of the input text. Although we instructed the
chatbot not to provide any explanation for its response, in these situations ChatGPT still gave
a reason for being unable to perform the task, as described in Figure 7. To address the problem,
we carried out the second prompting scenario without providing the context of the comments.
This approach forced ChatGPT to choose a label, and as a result, it was generally successful in
assigning labels to those ambiguous instances.</p>
          <p>In contrast, the Spanish dataset contains very few ambiguous instances, which partially explains
why ChatGPT performs significantly better there than the other methods do, and better than it does
on the English dataset.</p>
        </sec>
        <sec id="sec-5-2-2">
          <title>5.2.2. Model Bias</title>
          <p>We noticed some biased responses from ChatGPT, particularly when it relates to sensitive words,
such as the word “Trump”, which usually refers to President Donald Trump. As indicated in
Figure 8, we evaluated ChatGPT’s performance using a subset of the training and development
set, where we discovered that for several sentences that pertain to President Donald Trump
or merely include the word “Trump” or “Donald”, ChatGPT categorized them as NHS, despite
the expected correct label being HS. Other sentences also exhibit similar patterns of behavior,
such as “Even Israelis here don’t think that Trump’s stargate is racist”, and “There is a God who is
benevolent. Intelligence and physical attractiveness are more important than anything else. She is
probably one of the most intelligent CEOs and the most attractive speaker on TED Talks ever.”.</p>
          <p>Describe the sentiment of the given text using one of these two attributes: ‘Hope Speech’,
‘Non Hope Speech’. Knowing that the domains are relevant to social topics, such as Equality,
Diversity and Inclusion, including LGBTIQ issues, COVID-19, women in STEM, or Black Lives
Matter. Do not give explanation. Select one of these two labels: ‘Hope Speech’, ‘Non Hope
Speech’. Text: I’m a trump supporter. I’m anti Islam and anti feminism BUT I’m pro LGBT pro
women’s rights (just not for all the femenazi BS.</p>
          <p>ChatGPT answer: Non Hope Speech.</p>
          <p>Ground truth: Hope Speech.</p>
          <p>To understand the cause of this phenomenon, we allowed ChatGPT to explain its answer. The
results in Figure 9 demonstrate that ChatGPT associates the word “trump” with the former
US President Donald Trump. However, due to the sensitive and controversial nature of the topic,
ChatGPT categorized all related responses as NHS to avoid potential false positive alerts. This also
explains the relatively lower performance of ChatGPT on the English dataset compared to
other approaches.</p>
          <p>On the contrary, most of the instances in the Spanish dataset clearly convey an attitude
or sentiment towards the LGBT+ community. This also explains why ChatGPT performs
significantly better than the other methods, as well as better than it does on the English dataset.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>Describe the sentiment of the given text using one of these two attributes: ‘Hope Speech’,
‘Non Hope Speech’. Select one of these two labels: ‘Hope Speech’, ‘Non Hope Speech’. Text:
go trump</p>
      <p>ChatGPT answer: The sentiment of the given text “go trump” is a “Non Hope Speech”. This is
because the statement is an expression of support for former US President Donald Trump, and it
does not convey a hopeful or positive message related to any of the mentioned social topics. The
word "Trump" is often associated with controversial and divisive political opinions and actions,
and expressing support for a particular politician does not necessarily convey a positive sentiment
towards any of the social issues mentioned.</p>
      <p>In conclusion, we presented two different mechanisms to identify whether a text contains
Hope Speech or not, and verified our hypotheses regarding whether a language model's performance
improves when additional training data from another language is added, and regarding the model's
capacity to apply knowledge learned in one language to another unseen one, in a sequence-classification
task such as Hope Speech Detection. We also evaluated the impact of prompting scenarios
on ChatGPT's predictions and examined the ways in which ChatGPT can be influenced and
prejudiced by sensitive words. These words have the potential to be manipulated in a harmful
manner through data poisoning, thereby directing ChatGPT's responses. The experiments were
conducted on two datasets from different social networks with informal language, proposed
by the IberLEF 2023 Task, including Spanish tweets and English YouTube comments. Overall,
prompting with ChatGPT outperforms the other approaches when applied to the less-resourced
Spanish tweets, while monolingual XLMR surpasses all other setups on the rich-resourced
English comments.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>The authors express their gratitude for the 10-year friendship that they have shared, which has
been a continuous source of motivation and inspiration for this paper. This friendship has not
only made our research work enjoyable but also enriched and created HOPE in our personal
lives. We are thankful to each other for the unwavering support and collaborative efforts that
have culminated in this work. The authors also would like to express their gratitude to their
beloved kitty, Ðô Ðô, for being an absolute rockstar throughout the writing of this paper - even
if it meant using the laptop as a comfy throne. Purrfect job, Ðô Ðô!</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Jiménez-Zafra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rangel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montes-y Gómez</surname>
          </string-name>
          ,
          <article-title>Overview of IberLEF 2023: Natural Language Processing Challenges for Spanish and other Iberian Languages</article-title>
          ,
          <source>in: Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2023), co-located with the 39th Conference of the Spanish Society for Natural Language Processing (SEPLN 2023), CEUR-WS.org</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <article-title>Hopeedi: A multilingual hope speech detection dataset for equality, diversity, and inclusion</article-title>
          ,
          <source>in: Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotion's in Social Media</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>53</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Burnap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Colombo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Scourfield</surname>
          </string-name>
          ,
          <article-title>Machine classification and analysis of suicide-related communication on twitter</article-title>
          ,
          <source>in: Proceedings of the 26th ACM conference on hypertext &amp; social media</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>75</fpage>
          -
          <lpage>84</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D. N.</given-names>
            <surname>Milne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pink</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hachey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Calvo</surname>
          </string-name>
          ,
          <article-title>Clpsych 2016 shared task: Triaging content in online peer-support forums</article-title>
          ,
          <source>in: Proceedings of the third workshop on computational linguistics and clinical psychology</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>118</fpage>
          -
          <lpage>127</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Jiménez-Zafra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>García-Cumbreras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>García-Baena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>García-Díaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Valencia-García</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Ureña-López</surname>
          </string-name>
          ,
          <article-title>Overview of HOPE at IberLEF 2023: Multilingual Hope Speech Detection</article-title>
          ,
          <source>Procesamiento del Lenguaje Natural</source>
          <volume>71</volume>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>García-Baena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Á.</given-names>
            <surname>García-Cumbreras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Jiménez-Zafra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>García-Díaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Valencia-García</surname>
          </string-name>
          ,
          <article-title>Hope speech detection in Spanish: The LGBT case</article-title>
          ,
          <source>Language Resources and Evaluation</source>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Palakodety</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>KhudaBukhsh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. G.</given-names>
            <surname>Carbonell</surname>
          </string-name>
          ,
          <article-title>Hope speech detection: A computational analysis of the voice of peace</article-title>
          ,
          <year>2020</year>
          . arXiv:1909.12940.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hande</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Priyadharshini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sampath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Thamburaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chandran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <article-title>Hope speech detection in under-resourced Kannada language</article-title>
          ,
          <source>ArXiv abs/2108.04616</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>E.</given-names>
            <surname>Hossain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Sharif</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hoque</surname>
          </string-name>
          ,
          <article-title>NLP-CUET@LT-EDI-EACL2021: Multilingual code-mixed hope speech detection using cross-lingual representation learner</article-title>
          ,
          <source>in: Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion</source>
          , Association for Computational Linguistics, Kyiv,
          <year>2021</year>
          , pp.
          <fpage>168</fpage>
          -
          <lpage>174</lpage>
          . URL: https://aclanthology.org/2021.ltedi-1.25.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Aggarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chandana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nemade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saumya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Biradar</surname>
          </string-name>
          ,
          <article-title>Hope speech detection on social media platforms</article-title>
          ,
          <year>2022</year>
          . arXiv:2212.07424.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>T. S</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. t.</given-names>
            <surname>Tasubilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sai rahul</surname>
          </string-name>
          ,
          <article-title>Amrita@LT-EDI-EACL2021: Hope speech detection on multilingual text</article-title>
          ,
          <source>in: Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion</source>
          , Association for Computational Linguistics, Kyiv,
          <year>2021</year>
          , pp.
          <fpage>149</fpage>
          -
          <lpage>156</lpage>
          . URL: https://aclanthology.org/2021.ltedi-1.22.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>K.</given-names>
            <surname>Mahajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Al-Hossami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shaikh</surname>
          </string-name>
          ,
          <article-title>TeamUNCC@LT-EDI-EACL2021: Hope speech detection using transfer learning with transformers</article-title>
          ,
          <source>in: Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion</source>
          , Association for Computational Linguistics, Kyiv,
          <year>2021</year>
          , pp.
          <fpage>136</fpage>
          -
          <lpage>142</lpage>
          . URL: https://aclanthology.org/2021.ltedi-1.20.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>Puranik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hande</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Priyadharshini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thavareesan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <article-title>IIITT@LT-EDI-EACL2021-hope speech detection: There is always hope in transformers</article-title>
          ,
          <source>in: Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion</source>
          , Association for Computational Linguistics, Kyiv,
          <year>2021</year>
          , pp.
          <fpage>98</fpage>
          -
          <lpage>106</lpage>
          . URL: https://aclanthology.org/2021.ltedi-1.13.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>N.</given-names>
            <surname>Ghanghor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ponnusamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Kumaresan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Priyadharshini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thavareesan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <article-title>IIITK@LT-EDI-EACL2021: Hope speech detection for equality, diversity, and inclusion in Tamil, Malayalam and English</article-title>
          ,
          <source>in: Proceedings of the First Workshop on Language Technology for Equality, Diversity and Inclusion</source>
          , Association for Computational Linguistics, Kyiv,
          <year>2021</year>
          , pp.
          <fpage>197</fpage>
          -
          <lpage>203</lpage>
          . URL: https://aclanthology.org/2021.ltedi-1.30.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jing</surname>
          </string-name>
          ,
          <article-title>Is ChatGPT a good keyphrase generator? A preliminary study</article-title>
          ,
          <year>2023</year>
          . arXiv:2303.13001.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>H.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Jiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lyu</surname>
          </string-name>
          ,
          <article-title>ChatGPT or Grammarly? Evaluating ChatGPT on grammatical error correction benchmark</article-title>
          ,
          <year>2023</year>
          . arXiv:2303.13648.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <article-title>How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection</article-title>
          ,
          <year>2023</year>
          . arXiv:2301.07597.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>O.</given-names>
            <surname>Aydın</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Karaarslan</surname>
          </string-name>
          ,
          <article-title>OpenAI ChatGPT generated literature review: Digital twin in healthcare</article-title>
          ,
          <source>SSRN Electronic Journal</source>
          (
          <year>2022</year>
          ). doi:10.2139/ssrn.4308687.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Min</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <article-title>Evaluation of ChatGPT as a question answering system for answering complex questions</article-title>
          ,
          <year>2023</year>
          . arXiv:2303.07992.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>R.</given-names>
            <surname>Omar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Mangukiya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kalnis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mansour</surname>
          </string-name>
          ,
          <article-title>ChatGPT versus traditional question answering for knowledge graphs: Current status and future directions towards knowledge graph chatbots</article-title>
          ,
          <year>2023</year>
          . arXiv:2302.06466.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>N.</given-names>
            <surname>Bian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <article-title>ChatGPT is a knowledgeable but inexperienced solver: An investigation of commonsense problem in large language models</article-title>
          ,
          <year>2023</year>
          . arXiv:2303.16421.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kocoń</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Cichecki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kaszyca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kochanek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Szydło</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Baran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bielaniewicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gruza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Janz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kanclerz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kocoń</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Koptyra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Mieleszczenko-Kowszewicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Miłkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Oleksy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Piasecki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ł.</given-names>
            <surname>Radliński</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Wojtasik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Woźniak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kazienko</surname>
          </string-name>
          ,
          <article-title>ChatGPT: Jack of all trades, master of none</article-title>
          ,
          <year>2023</year>
          . arXiv:2302.10724.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>F.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Kwak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>An</surname>
          </string-name>
          ,
          <article-title>Is ChatGPT better than human annotators? Potential and limitations of ChatGPT in explaining implicit hate speech</article-title>
          ,
          <source>ArXiv abs/2302.07736</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pelicon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pranjić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Miljković</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Škrlj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pollak</surname>
          </string-name>
          ,
          <article-title>Zero-shot learning for cross-lingual news sentiment classification</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>10</volume>
          (
          <year>2020</year>
          )
          <fpage>5993</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>H. T. H.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Martinc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Doucet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pollak</surname>
          </string-name>
          ,
          <article-title>Can cross-domain term extraction benefit from cross-lingual transfer?</article-title>
          ,
          <source>in: Discovery Science: 25th International Conference, DS 2022, Montpellier, France, October 10-12, 2022, Proceedings</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>363</fpage>
          -
          <lpage>378</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Conneau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Khandelwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Chaudhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Wenzek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Guzmán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Grave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Stoyanov</surname>
          </string-name>
          ,
          <article-title>Unsupervised cross-lingual representation learning at scale</article-title>
          , arXiv preprint arXiv:1911.02116
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>