<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>“An bhfuil Gaeilge agat?”: Differences in User Interaction and Assistant Responses Across Languages of European Origin in Large-scale Conversational Datasets</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aldan Creo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Independent author</institution>
          ,
          <addr-line>Dublin, IE</addr-line>
        </aff>
      </contrib-group>
      <fpage>102</fpage>
      <lpage>113</lpage>
      <abstract>
        <p>This study presents a comprehensive analysis of user interactions and assistant responses across 28 European languages using the WildChat and LMSYS datasets, addressing an existing gap in the understanding of multilingual conversational AI. We examine five specific dimensions: the topics discussed, the length of the conversations, the sentiment expressed, the toxicity of the interactions, and the quality of the responses. Our findings indicate significant cross-linguistic variations that have important implications for the development and deployment of language models. Topic analysis shows a high degree of overlap across languages, indicating that users engage with similar subjects regardless of their linguistic background. We observe a positive correlation between the frequency of language use and conversation length, which suggests that different engagement patterns may be at play across language communities. Sentiment analysis indicates a high degree of consistency in neutral tones across languages, whereas toxicity levels vary considerably, with some languages exhibiting notably elevated scores. To assess response quality, we introduce a custom neural architecture based on the classification of user-assistant interaction triples. Our model achieved an accuracy of 0.82 and served to uncover variations in user satisfaction across language groups. Speakers of Romance languages exhibited higher levels of satisfaction, whereas those of Eastern European languages tended to show lower satisfaction with their interactions with the assistant. Our findings underscore the need for language-specific strategies in conversational AI development, particularly in content moderation, conversation design, and quality assessment.
By highlighting the differences and commonalities in conversational interactions across languages, our work provides insights for researchers and developers seeking to better understand and address the needs of users across a diverse linguistic landscape.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Natural Language Processing</kwd>
        <kwd>Conversational AI</kwd>
        <kwd>Multilingualism</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The development of conversational assistants has seen significant advancements in recent years,
following the advent of Transformer-based architectures [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], as exemplified by the GPT family of models.
These assistants are capable of engaging in conversations with users on a wide range of topics, providing
information, answering questions, and even engaging in small talk [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        The popularity of conversational assistants has therefore surged across various applications, including
customer service and language learning [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. Notably, ChatGPT exceeded 100 million monthly active
users within its first two months, a testament to the rising interest in conversational AI [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. However,
despite this booming interest, research and development in the field remain predominantly
English-centric, with limited focus on other languages [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
      </p>
      <p>
        A branch of research that is key to the development of conversational assistants is the analysis of
user interactions and assistant responses at a large scale. Two works stand out in this area due to their
size and diversity: the WildChat and LMSYS datasets [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ], which contain a large number of user
interactions with conversational assistants and are publicly available. However, there is still work to be
done in analyzing these datasets, due to their size and complexity.
      </p>
      <p>This study aims to explore user interactions and assistant responses, focusing on linguistic differences
across European languages. Previous research has only examined datasets as a whole [10], without
delving into cross-language discrepancies—a gap our study seeks to fill. In contrast, we seek to provide
insights into cross-language differences within large-scale conversational datasets, in order to inform the
development of assistants that are better tailored to the needs of users of a wider range of languages. Our
analysis focuses on language groups rather than the countries in which users reside. We acknowledge
that languages like Spanish or Portuguese may have significant representation from Latin American
users, contributing to the diversity of our findings.</p>
      <p>The rest of this paper is structured as follows. The subsequent sections examine whether notable
differences exist across languages in terms of topics discussed (RQ1), length of interactions (RQ2),
sentiment expressed (RQ3), toxicity (RQ4), and quality (user satisfaction) of responses (RQ5). We
then proceed to an integrative discussion and conclusion.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Datasets</title>
      <p>In this section, we describe the datasets used for our analysis.</p>
      <p>
        We combined the WildChat and LMSYS datasets [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ], which contain 990,372 and 1,000,000
examples, respectively. Nevertheless, the count of examples we analyze is lower, as we applied several
preprocessing steps to clean the data. First, we discarded examples where the user’s initial message
contained fewer than five words, as short inputs hinder the accuracy of language detection. For
instance, messages like “Test” were often misclassified, such as being incorrectly identified as Estonian.
Furthermore, we excluded interactions where the toxicity scores were not defined, as these annotations
are essential for our toxicity analysis.
      </p>
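As a sketch, the two filtering criteria above could be implemented as follows; the `conversation` and `toxicity` field names are illustrative stand-ins, not the datasets' actual schema:

```python
def keep_example(example):
    """Apply the two preprocessing filters described above."""
    first_user_msg = example["conversation"][0]["content"]
    # Discard examples whose initial user message has fewer than five words,
    # since very short inputs hinder language detection (e.g. "Test").
    if len(first_user_msg.split()) < 5:
        return False
    # Discard interactions without toxicity annotations, which are
    # required for the toxicity analysis (RQ4).
    if example.get("toxicity") is None:
        return False
    return True

# Three toy examples: too short, valid, and missing toxicity annotations.
examples = [
    {"conversation": [{"role": "user", "content": "Test"}], "toxicity": [0.0]},
    {"conversation": [{"role": "user", "content": "What is the weather like today in Dublin?"}],
     "toxicity": [0.01]},
    {"conversation": [{"role": "user", "content": "Please summarize this paragraph for me now"}],
     "toxicity": None},
]
filtered = [ex for ex in examples if keep_example(ex)]
```

Only the second example survives both filters.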
      <p>To ensure a comprehensive representation of European languages, we included 28 languages, shown
in Figure 1, together with the number of examples in each. However, we excluded languages such as
Basque and Breton because of the lack of a sufficient number of examples in the datasets, as well as the
lack of language models trained specifically for these languages, which could introduce bias when using
them in our analysis. After applying these filtering criteria, we obtained a total of 781,376 examples.</p>
    </sec>
    <sec id="sec-3">
      <title>3. RQ1: Topics</title>
      <p>In this section, we analyze the distribution of topics across languages. We utilize language tags as a
clustering feature for semantically-informed embeddings. We hypothesize that if speakers of different
languages tend to discuss distinct topics, language tags should form clusters with clear boundaries.
However, if topics are more uniformly distributed across languages, the boundaries will be blurrier.</p>
      <p>We commence our analysis by selecting all initial user messages in the dataset, as these are the most
representative, establishing the context for subsequent discourse. We then use two multilingual models
from the Sentence Transformers library [11]—paraphrase-multilingual-mpnet-base-v2 and
paraphrase-multilingual-MiniLM-L12-v2—to generate embeddings of these messages.</p>
      <p>To evaluate how well messages cluster based on language, we conduct a silhouette score analysis
(ranging from -1 to 1), which measures cluster cohesion and separation. A score of 1 indicates
well-defined, separated clusters, while a score of 0 suggests overlap. Negative scores indicate poorly defined
or incorrect clusters [12].</p>
      <p>Due to the high computational cost of this analysis, which scales quadratically with the number
of examples, we performed the silhouette score calculation on 20 randomly-selected subsets, each
representing 5% of the dataset, and then averaged the results. The mean silhouette scores for the two
models were −0.121(24) — mean(s.d.) — and −0.115(30). These negative scores indicate significant
semantic overlap between clusters, confirming that languages are not effective clustering tags.</p>
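The subset-averaging procedure can be sketched as follows; synthetic random embeddings and labels stand in here for the Sentence Transformers outputs and the language tags:

```python
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the message embeddings and language tags; in the
# paper these come from paraphrase-multilingual-mpnet-base-v2.
embeddings = rng.normal(size=(2000, 32))
languages = rng.integers(0, 5, size=2000)  # 5 hypothetical language labels

# Silhouette analysis scales quadratically with the number of examples, so
# score 20 random 5% subsets and average, as described above.
scores = []
for _ in range(20):
    idx = rng.choice(len(embeddings), size=len(embeddings) // 20, replace=False)
    scores.append(silhouette_score(embeddings[idx], languages[idx]))

mean_score, sd = np.mean(scores), np.std(scores)
```

With unstructured embeddings and arbitrary labels, the averaged score lands near zero, mirroring the lack of language separation reported above.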
      <p>Per-language silhouette scores, which align with the overall results, are provided in the supplementary
material. Based on this analysis, we conclude that users across linguistic groups engage with similar
themes, rather than showing strong language-specific patterns. This finding highlights the universality
of topics and suggests that conversational assistants must prioritize general topic coverage and flexibility.
Moreover, the lack of clustering implies that cultural or regional nuances may play a smaller role in
topic differentiation than previously expected.</p>
      <p>[Figure: scatter of per-language data points for the 28 languages; axis “Number of user words” with ticks 20–100.]</p>
    </sec>
    <sec id="sec-4">
      <title>4. RQ2: Length</title>
      <p>In this section, we explore the correlation between the mean number of words written by users in a
conversation and the number of examples per language.</p>
      <p>As both variables are not normally distributed, we study their interaction by calculating the Spearman
rank correlation coefficient, which ranges from -1 to 1. A value of -1 indicates a perfect negative
correlation; 0, no correlation; and 1 signifies a perfect positive correlation. The coefficient evaluates to
0.51, indicating a moderate positive correlation between the number of user words and the number of
examples in a language. With a p-value of 0.0055 &lt; α = 0.05, the correlation is statistically significant.</p>
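This calculation can be reproduced with `scipy.stats.spearmanr`; the per-language figures below are illustrative stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-language statistics: number of examples and mean number
# of user words per conversation (values are illustrative only).
n_examples = np.array([523000, 90000, 41000, 8000, 2500, 900, 300, 120])
mean_user_words = np.array([95.0, 80.0, 71.0, 60.0, 64.0, 41.0, 35.0, 28.0])

# Spearman's rank correlation is appropriate here because neither variable
# is normally distributed; it only relies on the ranks of the values.
rho, p_value = spearmanr(n_examples, mean_user_words)
```

A strongly monotone relationship yields a coefficient close to 1 with a small p-value.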
      <p>While not a perfect correlation, the strength of the relationship suggests that the number of user
words can be considered a reasonably reliable predictor of the number of examples in a language. Figure
2 presents a scatter plot illustrating this relationship. Additionally, Table 1 provides the per-conversation
mean number of messages, user words, and assistant words for each language.</p>
      <p>In essence, we find that speakers of languages with more interactions in conversational datasets
tend to engage in longer conversations, as measured by the number of user words. This trend provides
moderate support for the notion that there exist behavioral differences across groups of language users,
which may be indicative of cultural or linguistic factors influencing conversation length. However,
further research is warranted to ascertain the underlying reasons for these differences.</p>
    </sec>
    <sec id="sec-5">
      <title>5. RQ3: Sentiment</title>
      <p>In this section, we explore sentiment differences across languages, as this allows us to understand the
emotional tone of conversations and provides insights into both user experience and interaction quality.</p>
      <p>We use the pre-trained multilingual classification model twitter-XLM-roBERTa-base [13] for our
analysis. This model is particularly suited to our multilingual setting, as it has been trained on a large,
diverse corpus of Twitter messages in various languages.</p>
      <p>Our analysis focuses on both the user and the assistant first messages in each conversation, as these
set the tone and are likely the most representative of the overall sentiment. We calculate sentiment
scores for both messages, then aggregate these scores across languages. We show the results in Table 2.</p>
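The aggregation step might look as follows; the scores are hypothetical stand-ins for the classifier outputs, and `mean_sd` illustrates the compact mean(s.d.) notation used in the tables:

```python
import statistics

# Hypothetical classifier outputs standing in for twitter-XLM-roBERTa-base
# predictions on first messages: (language, positive-sentiment score) pairs.
scores = [
    ("French", 0.21), ("French", 0.14), ("French", 0.18),
    ("Polish", 0.09), ("Polish", 0.15), ("Polish", 0.12),
]

# Group the scores per language.
by_lang = {}
for lang, score in scores:
    by_lang.setdefault(lang, []).append(score)

def mean_sd(values):
    """Format as the compact mean(s.d.) notation used in the tables, e.g. 0.16(11)."""
    m = statistics.mean(values)
    sd = statistics.pstdev(values)
    return f"{m:.2f}({round(sd * 100):02d})"

summary = {lang: mean_sd(vals) for lang, vals in by_lang.items()}
```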
      <p>One clear observation is that the assistant’s messages tend to be more positive than the users’, with
an overall mean of 0.2205(15) compared to 0.16(11). This difference likely reflects the Reinforcement
Learning from Human Feedback (RLHF) paradigm used for training [14], which encourages the assistant
to maintain a more positive and helpful tone. Interestingly, the assistant also shows a slightly higher
negative sentiment score (0.28(20)) than the users (0.26(20)). This could also be attributed to RLHF,
which prompts it to refrain from engaging in potentially toxic conversations, thus increasing the
frequency of negative sentiment classifications in those contexts [15]. Overall, however, both user and
assistant messages tend to be neutral, with a mean neutral score of 0.50(18) and 0.58(19), respectively.</p>
      <p>These observations show that while there are differences in the sentiment expressed by users and
assistants, the sentiment across languages tends to remain fairly neutral.</p>
    </sec>
    <sec id="sec-6">
      <title>6. RQ4: Toxicity</title>
      <p>We perform a toxicity analysis to identify potential differences, covering every message in the dataset (a total
of 2,951,678). There exist 11 types of toxicity annotations generated by the dataset creators with the
OpenAI Moderations API. To simplify our analysis, we aggregate these categories into two general
toxicity scores using the mean and maximum values across the categories. We do this for each example.
Then, we calculate the mean of the two across the examples, which we present in Table 3.</p>
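A sketch of this two-step aggregation, with hypothetical annotation values in place of the OpenAI Moderations API scores:

```python
import numpy as np

# Hypothetical per-message toxicity annotations: rows are messages, columns
# are the 11 moderation categories (values are illustrative only).
scores = np.array([
    [0.00, 0.01, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.10, 0.02, 0.88, 0.01, 0.00, 0.03, 0.00, 0.05, 0.00, 0.01, 0.00],
])

# Two aggregated per-example scores: the mean and the maximum across
# the 11 categories.
mean_toxicity = scores.mean(axis=1)
max_toxicity = scores.max(axis=1)

# Dataset-level figures: the mean of each aggregate across examples.
overall_mean = mean_toxicity.mean()
overall_max = max_toxicity.mean()
```

The maximum-based aggregate is sensitive to a single highly toxic category, while the mean-based one dilutes it across all eleven.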
      <p>In the rest of this section, we focus on the perspective of the user messages, as they are generally
causative of the toxicity in the assistant’s side of the conversation. To identify significant differences
in toxicity across languages, we employ the Kruskal-Wallis test. This is a non-parametric method
well-suited to compare medians of toxicity scores across multiple independent samples, especially
given the non-normal distribution of toxicity values, which cluster near the extremes. We test a null
hypothesis H0 = “The medians of the toxicity scores are equal across languages” with α = 0.05.</p>
      <p>Since toxicity scores are continuous and generally skewed towards values close to 0, directly applying
the Kruskal-Wallis test might exaggerate differences between languages due to minor deviations in
low-toxicity messages. To mitigate this, we round the scores to two decimal places, reducing the number
of unique values and treating very close values as ties. The outcomes of the Kruskal-Wallis test, as well
as independent analyses for each toxicity category and aggregated scores, are presented in Table 4.</p>
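A sketch of the rounding and testing procedure, using synthetic skewed toxicity scores in place of the real annotations:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)

# Hypothetical per-language toxicity scores, skewed towards 0 as in the data.
toxicity = {
    "English": rng.beta(0.3, 8, size=500),
    "German": rng.beta(0.3, 8, size=500),
    "Irish": rng.beta(0.2, 10, size=300),
}

# Round to two decimal places so that very close low-toxicity values are
# treated as ties, avoiding exaggerated differences between languages.
samples = [np.round(v, 2) for v in toxicity.values()]

stat, p_value = kruskal(*samples)
reject_h0 = p_value < 0.05  # alpha = 0.05
```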
      <p>With the exception of the “self-harm” category, the p-values for all other categories are less than α,
indicating significant differences in toxicity across languages. This necessitates pairwise comparisons
to determine which languages exhibit meaningful disparities. We use Dunn’s test for this purpose, with
the null hypothesis H0 = “The probability that a randomly selected message from one language has a
higher toxicity score than one from another language is 0.5.” To control for the increased risk of Type I
errors due to multiple comparisons, we apply the Bonferroni correction to the p-values. The results of
these pairwise comparisons for the averaged toxicity scores are shown in Figure 3, while results for
other configurations are included in the supplementary materials.</p>
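Dunn's test itself is available in packages such as scikit-posthocs; as a simplified, self-contained stand-in, the sketch below runs pairwise Mann-Whitney U tests with a manual Bonferroni correction on hypothetical scores:

```python
from itertools import combinations
from scipy.stats import mannwhitneyu

# Hypothetical per-language toxicity scores (illustrative values only).
groups = {
    "English": [0.0, 0.02, 0.3, 0.5, 0.9, 0.8, 0.4, 0.7],
    "Irish": [0.0, 0.0, 0.01, 0.02, 0.0, 0.01, 0.03, 0.0],
    "Danish": [0.0, 0.01, 0.0, 0.02, 0.01, 0.0, 0.02, 0.01],
}

# Bonferroni correction: multiply each raw p-value by the number of
# pairwise comparisons, capping the result at 1.
n_comparisons = len(list(combinations(groups, 2)))
corrected = {}
for a, b in combinations(groups, 2):
    _, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    corrected[(a, b)] = min(p * n_comparisons, 1.0)
```

The correction makes each individual comparison more conservative, controlling the family-wise Type I error rate.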
      <p>Key findings from the pairwise comparisons include:
1. No Statistically Significant Differences: For most languages—such as Bulgarian, Croatian, Czech,
Danish, Estonian, Finnish, Galician, Greek, Hungarian, Irish, Italian, Latvian, Lithuanian, Maltese,
Norwegian, Polish, Romanian, Slovak, and Slovenian—the comparisons are not statistically
significant. In these cases, we cannot reject H0, indicating it is equally likely that a randomly
selected message from one of these languages has a higher or lower toxicity score than another.
2. Significant Differences in Specific Languages: Pairwise comparisons involving Dutch, English,
French, German, Spanish, Swedish, and Ukrainian reveal significant differences compared to
several languages from the first group (Greek, Italian, Polish, Portuguese, Romanian, Spanish,
Swedish and Ukrainian). These languages show distinct toxicity profiles, although notable
exceptions exist within this subset.
3. Group of Similar Toxicity: While Dutch, English, French, and German exhibit significant differences
compared to other languages, they are not significantly different from one another, suggesting a
group of languages with similar, higher-than-average toxicity scores.
4. Highest Average Toxicity: Languages such as Catalan, Dutch, English, German, and Swedish
display the highest average toxicity scores, with Dutch, English, German, and Swedish identified
as significantly different in the majority of pairwise comparisons.</p>
      <p>Overall, the results highlight considerable variability in the toxicity of conversations across languages.
The distribution of toxicity scores varies significantly between some languages, suggesting that
conversational toxicity may be influenced by a range of factors, including cultural backgrounds, the
structure of the language itself, potential biases in toxicity tagging, or a combination of these influences.</p>
    </sec>
    <sec id="sec-7">
      <title>7. RQ5: Quality</title>
      <p>In this section, we assess the quality of the assistant’s responses, which we define as how well the
assistant meets the user’s needs and expectations. To the best of our knowledge, no large-scale multilingual
dataset exists with labels for assistant response quality, such as those used in RLHF [16]. This motivated
us to develop a custom architecture tailored for evaluating assistant responses in a multilingual setting.</p>
      <p>Our approach is inspired by the siamese architecture proposed in Sentence-BERT [11]. While the
original architecture encodes two inputs and trains a classification head that is later discarded to retain
only the fine-tuned encoder, we focus on training the classification head with three inputs to evaluate
the quality of assistant responses, while the encoder remains frozen. The classification task asks “Is the
user satisfied with the assistant’s response?”, with classes “Yes,” “No,” and “N/A” (not applicable).</p>
      <p>We structure the model to process ⟨u1, a, u2⟩ triples, where u1 is the user’s initial message, a is the
assistant’s response, and u2 is the user’s second message. The goal is to capture the semantics of the
user-assistant exchange and assess whether the assistant’s response satisfies the user’s original query.
For instance, if the user were to ask about the weather, the embedding of their first message may belong
to a subspace “weather”; if the assistant were to respond with the weather forecast, the embedding of
the assistant’s response could also belong to the same subspace. The user, having received a satisfactory
response, may then express their satisfaction in their second message, the embedding of which could
encode a positive sentiment. We expect the model to be able to capture the distances between the
embeddings of the messages (e.g., the assistant’s response addressing the user’s first message), as well
as the semantics of the messages themselves (e.g., the user’s satisfaction with the assistant’s response).</p>
      <p>Formally, our model takes a tokenized input of shape 3 × L, where L is the maximum length of
the tokenized input (shorter sequences are padded), the three messages being stacked along a new
dimension. The model outputs a pooled representation p(m) for each message in the triple, producing
embeddings of dimension d = 768:</p>
      <p>p(m) ∈ R^d for m ∈ {u1, a, u2} (1)</p>
      <p>We concatenate these embeddings with their absolute differences into a single vector c:</p>
      <p>c = concatenate(p(u1), p(a), p(u2), |p(a) − p(u1)|, |p(u2) − p(a)|, |p(u2) − p(u1)|) ∈ R^(6d) (2)</p>
      <p>We pass c through a fully connected feedforward neural network with four layers, Wi and bi being
the weights and biases of layer i, each with a hidden size of 884 and Mish [17] activation functions to
obtain the output logits y:</p>
      <p>y = W4 · Mish(W3 · Mish(W2 · Mish(W1 · c + b1) + b2) + b3) + b4 ∈ R^3 (3)</p>
      <p>Finally, we apply an argmax operation on y to determine the class prediction. A diagram of
this architecture is shown in Figure 4. We used Optuna [18] to optimize the hyperparameters of the
classification head and training. Table 5 shows the search space and best values found.</p>
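A forward pass through this head can be sketched in NumPy; randomly initialized weights stand in for the trained parameters Wi and bi, and random vectors stand in for the frozen encoder's pooled embeddings:

```python
import numpy as np

def mish(x):
    # Mish(x) = x * tanh(softplus(x))
    return x * np.tanh(np.log1p(np.exp(x)))

rng = np.random.default_rng(0)
d, hidden = 768, 884

# Stand-ins for the pooled embeddings p(u1), p(a), p(u2).
p_u1, p_a, p_u2 = (rng.normal(size=d) for _ in range(3))

# Concatenate the three embeddings with their absolute differences (c in R^(6d)).
c = np.concatenate([
    p_u1, p_a, p_u2,
    np.abs(p_a - p_u1), np.abs(p_u2 - p_a), np.abs(p_u2 - p_u1),
])

# Four-layer feedforward head with Mish activations on the hidden layers.
sizes = [6 * d, hidden, hidden, hidden, 3]
weights = [rng.normal(scale=0.02, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

h = c
for W, b in zip(weights[:-1], biases[:-1]):
    h = mish(h @ W + b)
logits = h @ weights[-1] + biases[-1]

# Class prediction over {"Yes", "No", "N/A"}.
prediction = int(np.argmax(logits))
```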
      <p>For training, we built a dataset of ⟨1, , 2⟩ triples by sampling up to 1000 conversations per language
with at least three messages. We manually annotated 1000 examples using the Argilla platform [19].
We also generated translations for non-English conversations to assist the annotation process using the
EuroLLM-1.7B model [20]. We did not assess the quality of the translations, as they were utilized solely
for contextual purposes, thereby facilitating a general comprehension of the discourse. Furthermore, we
established explicit guidelines to facilitate a uniform interpretation of the task. Our annotation process
maintained reasonable balance across languages, with an average of 37(33) annotations/language.
Specific counts and annotation guidelines are provided in the supplementary material.</p>
      <p>We trained the model using cross-entropy loss, optimized with AdamW [21] for 10 epochs, with
the default hyperparameters in the Hugging Face Transformers library [22], β1 = 0.9, β2 = 0.999,
and ε = 1 × 10^−8. For the encoder, we utilized the paraphrase-multilingual-mpnet-base-v2
pretrained weights [11]. We reserved 10% of the data for validation and another 10% for testing,
performing 10-fold cross-validation. The model achieved a mean test accuracy of 0.820(45), significantly
outperforming both a random baseline (0.339(42)) and a majority class classifier (0.594(44)).</p>
      <p>When applying the model to the full dataset (raw numbers are reported in the supplementary
material), the results (Figure 5) reveal notable discrepancies in user satisfaction across linguistic groups.
Disregarding languages with a limited number of examples, where the results may lack
representativeness, we observe that the satisfaction of users with the assistant’s responses is generally high and tends
to be higher for users speaking languages belonging to the Romance language family [23], including
Italian, Portuguese, Spanish, Catalan, French, and Romanian. In contrast, users speaking languages
that originated in Eastern Europe, such as Bulgarian, Greek, and Hungarian, exhibit a lower level of
satisfaction with the assistant’s responses. English-speaking users demonstrate a satisfaction level that
is comparable to the overall mean, similar to that observed in German and Polish, although the latter
two languages exhibit a higher percentage of unsatisfied users.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Discussion</title>
      <p>Our analysis of the WildChat and LMSYS datasets reveals significant differences in user-assistant
interaction across European languages. These findings contribute to a more nuanced understanding of
multilingual conversational AI and highlight the importance of considering linguistic diversity in the
development and evaluation of language models.</p>
      <p>The lack of clear clustering by language in our topic analysis (RQ1) suggests that users across
different languages engage with conversational assistants on a wide variety of topics. This finding is
encouraging, as it indicates that the assistants are capable of handling a wide range of subjects across
multiple languages. However, it also emphasizes the need for language models to be equally proficient
in diverse topics across all supported languages.</p>
      <p>Our analysis of conversation length (RQ2) revealed a positive correlation between the number of user
words and the number of examples in a language. While further investigation is needed to understand
the underlying factors driving this correlation, it suggests that users in languages with more examples
may be more likely to engage in longer conversations, possibly due to a higher level of comfort or
familiarity with the conversational AI system or a better performance of the assistant in those languages.</p>
      <p>The sentiment analysis (RQ3) showed that both user and assistant messages tend to be neutral, with
assistant responses generally being either more positive or negative. This is a consistent result across
languages, and suggests that the current training approaches are effective in maintaining a coherent
tone across different linguistic contexts.</p>
      <p>Perhaps our most striking finding relates to toxicity (RQ4). The significant differences in toxicity
levels across languages highlight the need for language-specific approaches to content moderation
and toxicity detection. This is particularly important for languages like Dutch, English, German, and
Swedish, which also exhibited higher average scores.</p>
      <p>Finally, our quality analysis (RQ5) revealed considerable variations in user satisfaction across different
language groups. The higher satisfaction levels among Romance language speakers and lower levels
among Eastern European language speakers underscore the importance of tailoring conversational AI
to specific linguistic and cultural contexts.</p>
      <p>These findings collectively emphasize the need for a more nuanced, language-specific approach to
the development and evaluation of conversational AI. While current models show promise in their
ability to engage across multiple languages, there is still significant room for improvement in addressing
language-specific challenges and user expectations.</p>
    </sec>
    <sec id="sec-9">
      <title>9. Conclusion</title>
      <p>This study presents the first comprehensive analysis of differences in user interaction and assistant
responses across a wide range of European-origin languages for the WildChat and LMSYS datasets.
Our work addresses a critical gap in the existing literature, which has mainly treated these datasets as homogeneous.</p>
      <p>By examining topics, conversation length, sentiment, toxicity, and response quality, we have
uncovered significant variations across languages that have important implications for the development
and deployment of conversational AI systems. Our findings highlight the need for more nuanced,
language-specific approaches in areas such as content moderation and quality assessment.</p>
      <p>The insights gained from this study are crucial for ensuring that the perspectives and needs of
non-English speakers are adequately represented in the development of conversational AI. As the use
of these systems continues to grow globally, it is imperative that they are designed to provide equitable
and high-quality experiences across all languages.</p>
      <p>Future work should focus on developing language-specific strategies for improving conversational
AI, particularly in areas where we observed significant differences, such as toxicity levels and user
satisfaction. Additionally, expanding this analysis to include non-European languages would provide a
more comprehensive global perspective on multilingual conversational AI.</p>
      <p>In conclusion, our work seeks to contribute to a more inclusive and effective approach to
conversational AI development, providing insight into the importance of linguistic diversity in creating truly
global and user-centric AI systems. We hope that these findings will inform and inspire future research
and development efforts in multilingual conversational AI, ultimately leading to more equitable and
effective language technologies for users worldwide.</p>
    </sec>
    <sec id="sec-10">
      <title>Supplementary materials</title>
      <p>Supplementary materials are available at https://github.com/ACMCMC/eur-langs-convs-analysis.</p>
      <p>[9] L. Zheng, W.-L. Chiang, Y. Sheng, T. Li, S. Zhuang, Z. Wu, Y. Zhuang, Z. Li, Z. Lin, E. Xing, J. E.
Gonzalez, I. Stoica, H. Zhang, LMSYS-Chat-1M: A Large-Scale Real-World LLM Conversation
Dataset, 2023. URL: https://openreview.net/forum?id=BOfDKxfwt0.
[10] Y. Deng, W. Zhao, J. Hessel, X. Ren, C. Cardie, Y. Choi, WildVis: Open Source Visualizer for
Million-Scale Chat Logs in the Wild, 2024. URL: http://arxiv.org/abs/2409.03753. doi:10.48550/
arXiv.2409.03753, arXiv:2409.03753 [cs].
[11] N. Reimers, I. Gurevych, Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks,
2019. URL: http://arxiv.org/abs/1908.10084, arXiv:1908.10084 [cs].
[12] K. R. Shahapure, C. Nicholas, Cluster Quality Analysis Using Silhouette Score, in: 2020 IEEE 7th
International Conference on Data Science and Advanced Analytics (DSAA), 2020, pp. 747–748.
URL: https://ieeexplore.ieee.org/document/9260048. doi:10.1109/DSAA49011.2020.00096.
[13] F. Barbieri, L. Espinosa Anke, J. Camacho-Collados, XLM-T: Multilingual Language Models in
Twitter for Sentiment Analysis and Beyond, in: N. Calzolari, F. Béchet, P. Blache, K. Choukri,
C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, J. Odijk, S. Piperidis
(Eds.), Proceedings of the Thirteenth Language Resources and Evaluation Conference, European
Language Resources Association, Marseille, France, 2022, pp. 258–266. URL: https://aclanthology.
org/2022.lrec-1.27.
[14] Y. Bai, A. Jones, K. Ndousse, A. Askell, A. Chen, N. DasSarma, D. Drain, S. Fort, D. Ganguli,
T. Henighan, N. Joseph, S. Kadavath, J. Kernion, T. Conerly, S. El-Showk, N. Elhage, Z.
HatfieldDodds, D. Hernandez, T. Hume, S. Johnston, S. Kravec, L. Lovitt, N. Nanda, C. Olsson, D. Amodei,
T. Brown, J. Clark, S. McCandlish, C. Olah, B. Mann, J. Kaplan, Training a Helpful and Harmless
Assistant with Reinforcement Learning from Human Feedback, 2022. URL: https://arxiv.org/abs/
2204.05862v1.
[15] B. Wen, J. Yao, S. Feng, C. Xu, Y. Tsvetkov, B. Howe, L. L. Wang, Know Your Limits: A Survey of
Abstention in Large Language Models, 2024. URL: http://arxiv.org/abs/2407.18418. doi:10.48550/
arXiv.2407.18418, arXiv:2407.18418 [cs].
[16] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama,
A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano,
J. Leike, R. Lowe, Training language models to follow instructions with human feedback, 2022.</p>
      <p>URL: http://arxiv.org/abs/2203.02155. doi:10.48550/arXiv.2203.02155, arXiv:2203.02155 [cs].
[17] D. Misra, Mish: A Self Regularized Non-Monotonic Activation Function, 2020. URL: https:
//www.bmvc2020-conference.com/conference/papers/paper_0928.html.
[18] T. Akiba, S. Sano, T. Yanase, T. Ohta, M. Koyama, Optuna: A Next-generation Hyperparameter</p>
      <p>Optimization Framework, 2019. URL: http://arxiv.org/abs/1907.10902, arXiv:1907.10902 [cs, stat].
[19] D. Vila-Suero, F. Aranda, Argilla - Open-source framework for data-centric NLP, 2023. URL:
https://github.com/argilla-io/argilla.
[20] P. H. Martins, P. Fernandes, J. Alves, N. M. Guerreiro, R. Rei, D. M. Alves, J. Pombal, A. Farajian,
M. Faysse, M. Klimaszewski, P. Colombo, B. Haddow, J. G. C. de Souza, A. Birch, A. F. T. Martins,
EuroLLM: Multilingual Language Models for Europe, 2024. URL: https://arxiv.org/abs/2409.16235v1.
[21] I. Loshchilov, F. Hutter, Decoupled Weight Decay Regularization, 2017. URL: https://arxiv.org/abs/
1711.05101v3.
[22] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M.
Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S.
Gugger, M. Drame, Q. Lhoest, A. M. Rush, HuggingFace’s Transformers: State-of-the-art Natural
Language Processing, 2020. URL: http://arxiv.org/abs/1910.03771. doi:10.48550/arXiv.1910.
03771, arXiv:1910.03771 [cs].
[23] W. Heeringa, C. Gooskens, V. J. van Heuven, Comparing Germanic, Romance and Slavic:
Relationships among linguistic distances, Lingua 287 (2023) 103512. URL: https://www.sciencedirect.com/
science/article/pii/S0024384123000360. doi:10.1016/j.lingua.2023.103512.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Vaswani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Parmar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Uszkoreit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Polosukhin</surname>
          </string-name>
          , Attention is All you Need,
          <source>in: Advances in Neural Information Processing Systems</source>
          , volume
          <volume>30</volume>
          ,
          Curran Associates, Inc.,
          <year>2017</year>
          . URL: https://papers.nips.cc/paper_files/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Casheekar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lahiri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. S.</given-names>
            <surname>Prabhakar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Srinivasan</surname>
          </string-name>
          ,
          <article-title>A contemporary review on chatbots, AI-powered virtual conversational agents, ChatGPT: Applications, open challenges and future research directions</article-title>
          ,
          <source>Computer Science Review</source>
          <volume>52</volume>
          (
          <year>2024</year>
          )
          <fpage>100632</fpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S1574013724000169. doi:10.1016/j.cosrev.2024.100632.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>V.</given-names>
            <surname>Katragadda</surname>
          </string-name>
          ,
          <article-title>Automating Customer Support: A Study on the Efficacy of Machine Learning-Driven Chatbots and Virtual Assistants</article-title>
          ,
          <source>IRE Journals</source>
          <volume>7</volume>
          (
          <year>2023</year>
          )
          <fpage>600</fpage>
          -
          <lpage>610</lpage>
          . URL: https://www.irejournals.com/.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kohnke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. L.</given-names>
            <surname>Moorhouse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zou</surname>
          </string-name>
          ,
          <article-title>ChatGPT for Language Teaching and Learning</article-title>
          ,
          <source>RELC Journal</source>
          <volume>54</volume>
          (
          <year>2023</year>
          )
          <fpage>537</fpage>
          -
          <lpage>550</lpage>
          . URL: http://journals.sagepub.com/doi/10.1177/00336882231162868. doi:10.1177/00336882231162868.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.-L.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <article-title>A Brief Overview of ChatGPT: The History, Status Quo and Potential Future Development</article-title>
          ,
          <source>IEEE/CAA Journal of Automatica Sinica</source>
          <volume>10</volume>
          (
          <year>2023</year>
          )
          <fpage>1122</fpage>
          -
          <lpage>1136</lpage>
          . URL: https://www.ieee-jas.net/en/article/doi/10.1109/JAS.2023.123618. doi:10.1109/JAS.2023.123618.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Razumovskaia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Glavas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Majewska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Ponti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Korhonen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Vulic</surname>
          </string-name>
          ,
          <article-title>Crossing the Conversational Chasm: A Primer on Natural Language Processing for Multilingual Task-Oriented Dialogue Systems</article-title>
          ,
          <source>Journal of Artificial Intelligence Research</source>
          <volume>74</volume>
          (
          <year>2022</year>
          )
          <fpage>1351</fpage>
          -
          <lpage>1402</lpage>
          . URL: http://www.jair.org/index.php/jair/article/view/13083. doi:10.1613/jair.1.13083.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <article-title>AI Chatbots and Linguistic Injustice</article-title>
          ,
          <source>Journal of Universal Language</source>
          <volume>25</volume>
          (
          <year>2024</year>
          )
          <fpage>99</fpage>
          -
          <lpage>119</lpage>
          . URL: http://www.sejongjul.org/archive/view_article?doi=10.22425/jul.2024.25.1.99. doi:10.22425/jul.2024.25.1.99.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hessel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cardie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <source>WildChat: 1M ChatGPT Interaction Logs in the Wild</source>
          ,
          <year>2023</year>
          . URL: https://openreview.net/forum?id=Bl8u7ZRlbM.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>