<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.18653/v1/2022.emnlp</article-id>
      <title-group>
        <article-title>Small Language Models and Large Language Models in Oppositional Thinking Analysis: Capabilities, Biases and Challenges</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Álvaro Huertas-García</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Carlos Martí-González</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Javier Muñoz</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Enrique De Miguel Ambite</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer System Engineering, Polytechnic University of Madrid</institution>
          ,
          <addr-line>Calle de Alan Turing, 28031, Madrid</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Fundación Tecnológica Advantx - Funditec</institution>
          ,
          <addr-line>Paseo de la Castellana, 28046 , Madrid</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>13113</volume>
      <fpage>09</fpage>
      <lpage>12</lpage>
      <abstract>
<p>The proliferation of misinformation and conspiracy theories demands robust methods to differentiate legitimate critical discourse from harmful conspiratorial narratives. This study investigates discerning critical messages from conspiracy theories within COVID-19 discussions on Telegram. Preserving information integrity on social media impacts vital public discourse on health, politics, and science. The research employs two distinct approaches: linguistic style classification and contextual knowledge classification. The former leverages a diverse ensemble of Small Language Models (SLMs), Large Language Models (LLMs), and State-Space Models (SSMs), while the latter harnesses the capabilities of the Claude 2.0 Opus model for contextual analysis. Empirical evaluations demonstrate that the SLM models using Matryoshka embedding and Mamba (SSM) models exhibit superior performance for the English language dataset, achieving a Matthews Correlation Coefficient (MCC) of 0.793. For the Spanish dataset, the Spanish BERT baseline (SLM) attains an MCC of 0.699. Notably, a multilingual model trained on a balanced combination of English and Spanish data outperforms its monolingual counterparts, with the multilingual-e5-large model (LLM) achieving an MCC of 0.768 for English and 0.725 for Spanish. This finding underscores the potential of multilingual models to mitigate the “curse of multilinguality,” where performance often degrades on low-resource languages. However, the suboptimal performance of the Claude 2.0 Opus model, which exhibited a tendency to classify texts as conspiracy-related, highlights inherent biases that require further investigation. Overall, this study contributes to the development of advanced models that can effectively differentiate critical thinking from conspiratorial narratives in various linguistic contexts. Future research should prioritize identifying and addressing biases in large language models to ensure fair treatment of diverse perspectives, preserve freedom of expression, and ensure fair representation of narratives.</p>
      </abstract>
      <kwd-group>
        <kwd>PAN 2024</kwd>
        <kwd>Oppositional Thinking Analysis</kwd>
        <kwd>Transformers</kwd>
        <kwd>Mamba</kwd>
        <kwd>LLM</kwd>
        <kwd>Claude</kwd>
        <kwd>Bias</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The proliferation of misinformation and conspiracy theories has become a significant challenge in
today’s digital age, impacting vital aspects of public discourse such as health, politics, and science [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Conspiracy beliefs can shape human behaviour and decision-making processes,
making it crucial to understand the cognitive styles and personality traits associated with such beliefs.
Extensive psychological research has identified numerous predictors of conspiracy beliefs, including
personality factors like low agreeableness and high openness to experience [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Moreover, studies on
cognitive styles have revealed a correlation between belief in conspiracy theories and lower analytic
thinking coupled with higher intuitive thinking [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Understanding these human aspects of conspiracy beliefs is not merely an academic exercise; it
has far-reaching implications. This knowledge can inform behavioural interventions to mitigate the
spread of misinformation [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Furthermore, integrating cognitive and personality factors into natural
language processing (NLP) models can enhance their accuracy in distinguishing between critical and
conspiratorial narratives, ultimately improving their performance and reliability.
      </p>
      <p>In the realm of automatic content moderation, the challenge of distinguishing between conspiracy
theories and critical thinking in NLP models has emerged as a vital area of study. The prevalence of
conspiratorial content has escalated the need for robust methodologies that can accurately differentiate
between legitimate critical discourse and harmful conspiracy narratives. Maintaining the integrity of
information shared across social media platforms and other digital forums is crucial for preserving the
credibility of public discourse.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>
        While the focus on this topic remains relatively limited, related studies provide valuable insights
into methodologies and applications in adjacent areas. For instance, a significant contribution by [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
presents a framework for detecting conspiracy theories on Twitter using a novel recurrent model called
BORJIS, highlighting the efficacy and challenges of NLP techniques in identifying conspiratorial
content within vast amounts of social media data. Similarly, [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] explored fake news
detection related to COVID-19 and 5G conspiracy theories using BERT embeddings and Graph Neural
Networks, showcasing advanced NLP techniques for distinguishing misinformation from legitimate
critical analysis.
      </p>
      <p>
        Other studies follow a different approach, focusing on tracking the spread of misinformation across social networks,
such as the FacTeR-Check semi-automated fact-checking tool that uses semantic similarity and natural
language inference (NLI) to monitor the evolution of misinformation or disinformation on online social
networks [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Additionally, the use of camouflage for content evasion has also been reported, and works
have developed multilingual NER NLP models to counter these strategies, like the ”pyleetspeak” tool
for simulating word camouflage and a NER Transformer model for its detection [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        Furthermore, the research conducted by [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] explores the potential of NLP techniques in fostering
critical thinking skills within educational settings. It offers valuable insights into the systematic
instruction and assessment of critical thinking, specifically in comparison to conspiratorial thinking.
Finally, [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] emphasize the importance of explicit theorization in developing models that can accurately
differentiate between critical and conspiratorial thinking in their paper on gender bias in NLP research.
      </p>
      <p>While significant progress has been made in the field, there is still much to explore in analyzing
oppositional thinking using NLP. This research article addresses this issue in both English and Spanish,
contributing to the development of more sophisticated NLP systems for real-world scenarios.</p>
      <sec id="sec-2-1">
        <title>2.1. Competition Description</title>
        <p>
          The competition, titled “Oppositional Thinking Analysis: Conspiracy vs Critical Narratives,” is part of
the PAN at CLEF 2024 event [
          <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
          ]. Our focus is on the first subtask, which involves analyzing texts
from the Telegram platform related to the COVID-19 pandemic. The objective is to perform a binary
classification to differentiate between two types of narratives:
• Critical comment: Messages that question major decisions in the public health domain without
promoting a conspiracist mentality. These are critical opinions based on information that may
not be commonly accepted but do not imply secret plots or malevolent intentions.
• Conspiracy comment: Messages that portray the pandemic or public health decisions as results
of malevolent conspiracies by secret, influential groups. These messages often encourage distrust
based on unverified or poorly explained evidence.
        </p>
        <p>The official evaluation metric for this subtask is the Matthews Correlation Coefficient (MCC). MCC
is a measure of the quality of binary classifications, providing a balanced evaluation even when the
classes are of very different sizes. It is normalized, making it applicable to other datasets and ensuring
robust performance assessment across diverse scenarios.</p>
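        <p>For reference, MCC can be computed directly from the confusion-matrix counts. The sketch below is a minimal pure-Python version of the metric (scikit-learn's matthews_corrcoef performs the same computation); the labels shown are illustrative toy data, not from the competition dataset.</p>

```python
import math

def mcc(y_true, y_pred):
    """Matthews Correlation Coefficient for binary labels in {0, 1}."""
    # Confusion-matrix counts: 1 = CONSPIRACY, 0 = CRITICAL (toy convention)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: return 0.0 when any marginal is empty (undefined case)
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

        <p>Because both marginals enter the denominator, MCC stays informative even when one class dominates, which is why it was chosen as the official metric.</p>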
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>Our methodology is based on two main approaches: one that classifies texts according to their linguistic
style and content, which we refer to as the Linguistic Style Classification Approach; and another
that uses the input text combined with contextual knowledge and reasoning from large language
models (LLMs), referred to as the Contextual Knowledge Classification Approach.</p>
      <sec id="sec-3-1">
        <title>3.1. Linguistic Style Classification Approach</title>
        <sec id="sec-3-1-1">
          <title>3.1.1. Dataset Preprocessing</title>
          <p>The dataset comprises texts in both English and Spanish, categorized into CRITICAL and CONSPIRACY
narratives. The English dataset consists of 2,621 CRITICAL texts and 1,379 CONSPIRACY texts. The
Spanish dataset includes 2,538 CRITICAL texts and 1,462 CONSPIRACY texts. Both datasets were
divided into training (80%) and validation (20%) sets using a random seed of 42.</p>
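          <p>The split described above can be reproduced in a few lines. The helper below is an illustrative sketch (the function name and structure are ours), equivalent in spirit to scikit-learn's train_test_split with random_state=42:</p>

```python
import random

def split_dataset(texts, labels, train_frac=0.8, seed=42):
    """80/20 train/validation split with a fixed seed, as described in 3.1.1."""
    idx = list(range(len(texts)))
    random.Random(seed).shuffle(idx)  # dedicated RNG keeps the split reproducible
    cut = int(len(idx) * train_frac)
    train = [(texts[i], labels[i]) for i in idx[:cut]]
    valid = [(texts[i], labels[i]) for i in idx[cut:]]
    return train, valid
```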
          <p>The preprocessing involved analyzing the prevalence of URLs, emojis, and text length distributions.
URLs were removed to standardize the text data. The text length distributions for the English dataset
were found to be 743±740 characters for CONSPIRACY and 476±479 characters for CRITICAL. For the
Spanish dataset, the distributions were 1112±946 characters for CONSPIRACY and 641±577 characters
for CRITICAL. These distributions are illustrated in Figure 1.
3.1.2. Models
We employed a diverse range of models, both monolingual and multilingual. Except for Mamba, all
models are based on the Transformer architecture. The significance of Transformer models lies in their
attention mechanism, which allows them to efficiently handle dependencies in long sequences and
capture intricate patterns within the data. According to Vaswani et al. [11], the self-attention mechanism
of Transformers enables them to dynamically weigh the importance of different tokens in a sequence,
making them highly effective for various NLP tasks. Mamba, in contrast, is an advanced state-space
model (SSM) designed for efficient handling of complex sequences with large datasets, as detailed by
Gu and Dao [12].</p>
          <p>Below, we list the models used in our research along with brief descriptions:
Monolingual
• BERT-base-uncased [13]: A foundational model that effectively applies Transformers at scale,
expanding our understanding of linguistic context. We selected the largest variant to ensure a
comprehensive analysis and to compare historical model design evolution.
• DistilBERT1: A compact version of BERT by Hugging Face, offering a smaller and faster
alternative while maintaining similar performance. Suitable for various NLP tasks.
• Nomic: Nomic Embed [14] innovates in embedding techniques to provide dynamic, context-aware
representations, surpassing leading models as of February 2024. With a compact size, low
memory usage, and advanced training methods, Nomic Embed efficiently processes up to 8192
tokens, making it ideal for analyzing extensive online materials.
• DistilRoBERTa [15]: A faster and smaller version of RoBERTa, trained on the same corpus in a
self-supervised manner using BERT as a teacher.
• twitter-roberta-base-sentiment-latest [16]: A RoBERTa-base model fine-tuned for sentiment
analysis using tweets from January 2018 to December 2021, benchmarked with TweetEval.
• all-MiniLM-L6-v22: A Transformer model trained with contrastive loss on 1B sentence pairs
to encode sentences and short paragraphs into a dense vector space of 384 dimensions, suitable
for tasks like clustering or semantic search.
• mxbai-embed-large-v1 [17]: A powerful English embedding model known for its efficient size
and high performance. Using Matryoshka Embedding [18], it trains hidden layers to generate
high-quality embeddings independently of higher layers, reducing both the number of layers and
embedding dimensions. Ranked in the top 25 on the MTEB leaderboard3 for sentence embedding
tasks, it outperforms commercial models like OpenAI’s text-embedding-3-large, making it a top
choice for our research.
• Mamba4 [12]: An advanced state-space model designed for efficient handling of complex
sequences with large datasets. It uses a selection mechanism to decide whether to propagate or
discard information based on token relevance, providing a viable method for assessing intricate,
controversial, and critical comments on social media.
• dccuchile/bert-base-spanish-wwm-uncased [19]: Also known as BETO, this is a BERT model
trained on a large Spanish corpus using a vocabulary of about 31k BPE subwords constructed
with SentencePiece.</p>
          <p>Multilingual
• XLM-RoBERTa: A scaled cross-lingual multilingual sentence encoder version of the RoBERTa
model, trained on 2.5TB of data across 100 languages filtered from Common Crawl.
• LLAMA 2: A family of pre-trained and fine-tuned large language models (LLMs) by Meta AI,
useful for various research and commercial purposes.
• multilingual-e5: Developed at Microsoft, this sophisticated embedding model excels in tasks
requiring robust text representation, such as information retrieval, semantic textual similarity,
and text reranking. Initialized from xlm-roberta-large, it is continually trained on a mixture of
multilingual datasets, supporting 100 languages from xlm-roberta with potential performance
degradation for low-resource languages.</p>
          <p>In this style-based strategy, all these models are employed as the encoder body for the texts, to which
a classification layer of 1024 neurons is added.
1https://huggingface.co/distilbert/distilbert-base-uncased
2https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2
3https://huggingface.co/spaces/mteb/leaderboard
4https://huggingface.co/state-spaces/mamba-370m-hf</p>
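          <p>As a rough illustration of this setup, the sketch below shows how an encoder embedding could pass through a 1024-neuron hidden layer to produce class probabilities. It is a minimal NumPy stand-in with hypothetical, untrained random weights, not the actual training code, and the embedding size is an arbitrary example.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier_head(embedding, hidden=1024, n_classes=2):
    """Sketch of the classification head added on top of each encoder body.

    embedding: 1-D vector produced by the encoder for one text.
    A single hidden layer of 1024 neurons (ReLU) followed by a softmax
    output, mirroring the '1024 classifier neurons' described above.
    Weights are random here purely for illustration.
    """
    d = embedding.shape[0]
    w1 = rng.standard_normal((d, hidden)) * 0.02   # hypothetical init scale
    w2 = rng.standard_normal((hidden, n_classes)) * 0.02
    h = np.maximum(embedding @ w1, 0.0)            # ReLU hidden layer
    logits = h @ w2
    e = np.exp(logits - logits.max())              # numerically stable softmax
    return e / e.sum()                             # CRITICAL vs CONSPIRACY probs
```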
        </sec>
        <sec id="sec-3-1-2">
          <title>3.1.3. Hyperparameter Tuning, Importance, and Correlation</title>
          <p>For hyperparameter tuning, we used Bayesian optimization, which leverages prior evaluations to guide
its search process, enhancing model performance. Table 1 lists the explored hyperparameters,
including their ranges and sampling distributions.</p>
          <p>We analyzed the importance and correlation of hyperparameters with the Matthews Correlation
Coeficient (MCC). Correlation measures the linear relationship between hyperparameters and MCC,
indicating how changes in hyperparameters affect performance.</p>
          <p>Additionally, we calculated an importance metric exploiting the feature importance of a random
forest model, based on the idea that more important features appear more often in the trees of
the forest. Hyperparameters served as input features, with MCC as the target output. This provided
feature importance values, showing each hyperparameter’s contribution to predicting performance.
These analyses offer insights into how hyperparameters influence model performance.</p>
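          <p>A minimal sketch of this importance-and-correlation analysis might look as follows, using a synthetic tuning log in place of our actual experiment records (the column meanings and coefficients are invented for illustration):</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic tuning log: one row per trial with three hyperparameters
# (hypothetically: lr, weight_decay, batch_size), target column = MCC.
rng = np.random.default_rng(42)
X = rng.uniform(size=(200, 3))
y = -0.5 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.05, size=200)

# Linear correlation of each hyperparameter with MCC
corr = [float(np.corrcoef(X[:, j], y)[0, 1]) for j in range(3)]

# Importance from a random forest fit on the same log:
# hyperparameters as input features, MCC as the target output.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
importance = forest.feature_importances_
```

          <p>In this synthetic example the first column dominates the target, so it receives both the largest importance and a strongly negative correlation, mirroring the pattern we report for the learning rate.</p>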
          <p>The experiment tracking can be consulted in Weights and Biases (https://wandb.ai/huertas_97/PAN_2024_Opposing/workspace).</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Contextual Knowledge Classification Approach</title>
        <p>For this approach, we utilized the Claude 2.0 Opus model for zero-shot classification. This approach
relies on prompt engineering, using a temperature equal to 1 and without fine-tuning the model,
leveraging its extensive knowledge of language and context up to early 2023. Additionally, its multilingual
capability is well-suited to this task, as approximately 10% of the data used was non-English, according
to Anthropic. This model was accessed via Anthropic’s public API before May 6.</p>
        <p>Below is the final prompt used for the Zero-shot classification:
Claude 2.0 Opus Prompt
Your role is to analyze text inputs to identify whether they represent critical commentary or
conspiracy theories, each with distinct characteristics:
Critical Commentary:
Definition: Critical messages that question major decisions in the public health domain, but do
not promote a conspiracist mentality. It is an opinion, it may not be correct but do not consider
that the revendication belongs to a secret or a plot against the population in terms of influential
groups. It can be a critic based on information that may not be the common opinion, and it
could be wrong, but it is expressing a point of view that another can criticize.</p>
        <p>Characteristics: Applicability: Applies even when the topic might be susceptible to
conspiratorial interpretations.</p>
        <p>Conspiracy Commentary:</p>
        <p>Definition: Messages that view the pandemic or public health decisions as a result of a
malevolent conspiracy by secret, influential groups. It can be an opinion but the main problem is that
it tries to convince you to distrust based on evidences that are not well trusted or explained and
leave open the door to be distrustful instead of being critical based on information that may not
be the common opinion.</p>
        <p>Characteristics:
• Suspicion and Paranoia: Thrives on distrust of oficial narratives and institutions.
• Simplistic Explanations: Oversimplifies complexities by attributing them to the actions
of a few.</p>
        <p>• Resistance to Evidence: Dismisses contrary evidence as part of the cover-up.</p>
        <p>You are required to utilize web browser research extensively to verify claims and gather context
before making your classification. Be sure to adhere strictly to the output format, especially in
reporting URLs used in your research to ensure transparency and accountability.</p>
        <p>Additional Instruction:
• Always use a web browser to search for information related to the text. Your classification
should be informed by credible online sources. Include URLs of these sources in your
explanation to validate your findings and reasoning.
• Maintain neutrality in your classification process. Do not classify a text as
”CONSPIRACY” solely because the topic is related to commonly misunderstood or hot-button issues.
Instead, use clear evidence from the text and supporting information from web searches
to distinguish between critical perspectives and actual conspiracy theories. Include URLs
of these sources in your explanation to validate your findings and reasoning.</p>
        <p>Task Requirements:
• Classify the narrative of the text based on the categories above.
• Determine the main topics of the text in 2-3 words.
• Assign a Confidence Score from 0 to 1, indicating the certainty of your classification.
• Your explanation must reflect how the information sourced online influenced your
classiifcation and must include URLs for verification.</p>
        <p>Output Format: (It is crucial that the output strictly follows this format)
{
"Prediction": "CATEGORY_NAME",
"Confidence": [Confidence Score],
"Topic": ["topic1", "topic2"],
"Reason": "A concise explanation based on the characteristics with URLs of the sources used."
}
It is crucial that the output strictly follows this format.</p>
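        <p>Because a model may wrap its answer in extra prose despite the format instruction, a small parsing helper is useful before consuming the JSON. The function below is a hypothetical sketch (its name and fallback behaviour are our own, not part of the original pipeline):</p>

```python
import json
import re

def parse_model_output(raw):
    """Extract and normalize the JSON record the prompt asks the model to emit.

    Grabs the first {...} span before decoding, since models sometimes add
    surrounding text; returns a neutral default record when no JSON is found.
    """
    match = re.search(r"\{.*\}", raw, flags=re.DOTALL)
    if not match:
        return {"Prediction": None, "Confidence": 0.0, "Topic": [], "Reason": ""}
    record = json.loads(match.group(0))
    # Normalize fields so downstream scoring is case- and type-insensitive
    record["Prediction"] = str(record.get("Prediction", "")).upper()
    record["Confidence"] = float(record.get("Confidence", 0.0))
    return record
```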
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments and Results</title>
      <sec id="sec-4-1">
        <title>4.1. Training and Developing Results</title>
        <p>This section presents the results obtained from developing and evaluating various models. As shown
in Table 2, the best monolingual models for English are mxbai-embed-large-v1 and the Mamba 370m model,
both achieving an MCC of 0.793, while the best model for Spanish is bert-base-spanish-wwm-uncased, with
an MCC of 0.699.</p>
        <p>As shown in Table 2, the superior performance of the mxbai-embed-large-v1 model underscores
its effectiveness in encoding texts into embeddings. This model employs the Matryoshka Embedding
technique [18], where each layer is trained to produce high-quality embeddings independently, thus
enhancing the model’s overall performance. This approach contrasts sharply with the performance of
larger models such as Llama 2 [20], which, despite having over 7 billion parameters, underperforms
when a classifier head is added. This observation corroborates the notion that model size does not
necessarily correlate with task-specific performance. Optimizing the model architecture to improve
linguistic encoding, as demonstrated by mxbai-embed-large-v1, proves more beneficial than merely
increasing the number of parameters.</p>
        <p>Additionally, the performance of the Mamba 370m model, which matches mxbai-embed-large-v1
with an MCC of 0.793, highlights the potential of alternative architectures beyond Transformers. The
Mamba model, with its state-space approach and selective propagation mechanism, presents a
compelling case for further exploration of non-Transformer architectures in NLP tasks.</p>
        <p>The poor performance in English of the large language model Claude 2.0 Opus6, which is
typically a benchmark model for complex reasoning tasks, warrants further investigation. Despite its
state-of-the-art status in reasoning datasets, Claude 2.0 Opus showed a tendency to classify texts as
conspiracy-related. This bias was evident even when the model provided reasoning for its
classifications, suggesting a predisposition influenced by the sensitive nature of the subject matter. This finding
highlights the need for ongoing research into model biases and their impact on classification tasks,
particularly for topics with significant socio-cultural implications such as conspiracy theories.</p>
        <p>In the Spanish dataset (see Table 3), the bert-base-spanish-wwm-uncased model achieved lower
performance compared to its English counterparts, indicating potential limitations in the Spanish training
data or model architecture. Moreover, the multilingual model multilingual-e5-large, when trained on
Spanish data alone, still did not surpass the monolingual model. This suggests that, in this context,
monolingual models might be more effective than multilingual ones when only one language’s data is
used for training.</p>
        <p>Interestingly, when the multilingual model was trained on both English and Spanish datasets, it
achieved an MCC of 0.725, as shown in Table 4. This indicates that multilingual models can leverage
larger and more diverse datasets to enhance their understanding of the task. The ability of
multilingual models to generalize across languages is particularly evident when they are exposed to substantial
amounts of well-represented data, demonstrating their potential to exploit linguistic diversity for
improved performance.</p>
        <p>Overall, we selected the following models for the two runs of the competition:
• RUN 1 - Monolingual approach: the Mamba 370m model for English and the
bert-base-spanish-wwm-uncased model for Spanish.
• RUN 2 - Multilingual approach: the multilingual-e5-large model, trained on
both languages together.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Parameters Tuning, Importance and Correlation</title>
        <p>The results of our hyperparameter tuning highlight the significant influence of learning rate (lr) on
model performance, with an importance score of 0.531 and a negative correlation of -0.535 with MCC.
This suggests that optimizing the learning rate is crucial for achieving higher performance, as
inappropriate values can lead to suboptimal results. Runtime also showed considerable importance (0.196)
and a positive correlation (0.410), indicating that longer training times generally improve model
performance.</p>
        <p>Weight decay and sigmoid focal loss parameters, while less influential than the learning rate, still
play vital roles. The weight decay parameter had an importance of 0.163 and a negative correlation of
-0.323, suggesting that higher weight decay might adversely affect the model. Sigmoid focal loss [21]
parameters (alpha and gamma) demonstrated moderate importance, with alpha showing a positive
correlation (0.156) and gamma a negative one (-0.325). The focal loss function, designed to address
class imbalance, is given by:</p>
        <p>FL(p_t) = −α_t (1 − p_t)^γ log(p_t)
where p_t is the model’s estimated probability for the true class label, α_t is a weighting factor for
class imbalance, and γ is a focusing parameter that adjusts the rate at which easy examples are
down-weighted. This indicates a complex relationship where these parameters can be fine-tuned to balance
the model’s sensitivity to class imbalances effectively.</p>
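        <p>As a concrete reference, the per-example loss above can be sketched in plain Python. The alpha and gamma defaults here are common literature values, not the tuned values from our search:</p>

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t), one example.

    p: predicted probability of the positive class; y: true label in {0, 1}.
    With gamma = 0 and alpha = 1 this reduces to plain cross-entropy.
    """
    p_t = p if y == 1 else 1.0 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha  # class-imbalance weight
    # (1 - p_t)**gamma shrinks the loss on easy, well-classified examples
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

        <p>The modulating factor is what makes the loss focus training on hard examples: a confidently correct prediction contributes almost nothing, while a misclassified one keeps nearly its full cross-entropy weight.</p>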
        <p>Other parameters, such as epochs, batch size, and accumulation steps, showed lower importance
scores. Interestingly, batch size had a positive correlation with MCC (0.287), indicating that larger
batch sizes might contribute to better performance. However, the relatively low importance scores
for these parameters suggest that while they do influence performance, their impact is less critical
compared to the learning rate and regularization parameters.</p>
        <p>[Figure: Parameter importance and correlation with MCC performance — lr: importance 0.531, correlation -0.535; Runtime: importance 0.196, correlation 0.410; weight_decay: importance 0.163, correlation -0.323.]</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Official Competition Results and Conclusion</title>
      <p>Table 5 presents the test results, demonstrating that Run 2, which employs a single multilingual model,
outperforms the monolingual models. Notably, in both languages, the baseline performance of BERT
(MCC 0.7964) is exceeded. Out of 82 teams, only 17 surpass this threshold. Particularly striking is the
performance in Spanish, where only 13 teams exceed the MCC threshold of 0.6681, placing us in the
top three.</p>
      <p>
        These results highlight the potential advantages of multilingual models in achieving robust
performance across languages. The improved performance in Run 2 suggests that training a multilingual
model on data from both languages mitigates the so-called “curse of multilinguality”, where
multilingual models often struggle to distribute their knowledge equally across all languages. This
phenomenon has been documented in the literature, where multilingual models tend to underperform on
low-resource languages due to an imbalance in data distribution and representation [22, 23]. Our
findings support the view that providing a balanced dataset across languages can significantly enhance the fairness
and effectiveness of multilingual models; other works have applied such models to counter content evasion
on social media platforms [
        <xref ref-type="bibr" rid="ref6">6, 24</xref>
        ].
      </p>
      <p>Furthermore, this study underscores the importance of addressing biases in large language models
(LLMs). The bias observed in the Claude 2.0 Opus model, which showed a tendency to classify texts
as conspiracy-related, raises critical questions about the ethical deployment of AI technologies. Such
biases can have profound implications for freedom of expression and the equitable treatment of diverse
perspectives. Future research should focus on developing techniques to identify and mitigate these
biases, ensuring that LLMs operate fairly across different socio-cultural contexts.</p>
      <p>In conclusion, our findings support the utility of multilingual models in handling diverse linguistic
data, provided that training data is well-distributed across languages. The competition has allowed us
to conduct research that demonstrates the potential of such models in achieving high performance,
and it also emphasizes the necessity of continuous efforts to address and mitigate inherent biases in AI
systems. Moving forward, it is essential to explore advanced methodologies for bias detection and
mitigation, which will be crucial for the ethical and effective application of AI technologies in
real-world scenarios.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Goreis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Voracek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A Systematic</given-names>
            <surname>Review</surname>
          </string-name>
          and
          <article-title>Meta-Analysis of Psychological Research on Conspiracy Beliefs: Field Characteristics, Measurement Instruments, and Associations With Personality Traits, Frontiers in Psychology 10 (</article-title>
          <year>2019</year>
          )
          <article-title>205</article-title>
          . URL: https://www.frontiersin.org/article/ 10.3389/fpsyg.
          <year>2019</year>
          .00205/full. doi:
          <volume>10</volume>
          .3389/fpsyg.
          <year>2019</year>
          .
          <volume>00205</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Gjoneska</surname>
          </string-name>
          ,
          <article-title>Conspiratorial Beliefs and Cognitive Styles: An Integrated Look on Analytic Thinking, Critical Thinking, and Scientific Reasoning in Relation to (Dis)trust in Conspiracy Theories</article-title>
          ,
          <source>Frontiers in Psychology</source>
          <volume>12</volume>
          (
          <year>2021</year>
          )
          <fpage>736838</fpage>
          . URL: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.736838/full. doi:10.3389/fpsyg.2021.736838.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Martín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huertas-Tato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Huertas-García</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Villar-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Camacho</surname>
          </string-name>
          ,
          <article-title>FacTeR-Check: Semi-automated fact-checking through semantic similarity and natural language inference</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>251</volume>
          (
          <year>2022</year>
          )
          <fpage>109265</fpage>
          . doi:10.1016/j.knosys.2022.109265.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Galende</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hernández-Peñaloza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Uribe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. A.</given-names>
            <surname>García</surname>
          </string-name>
          ,
          <article-title>Conspiracy or not? A deep learning approach to spot it on Twitter</article-title>
          ,
          <source>IEEE Access</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>38370</fpage>
          -
          <lpage>38378</lpage>
          . doi:10.1109/ACCESS.2022.3165226.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hamid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shiekh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Said</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ahmad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hassan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Al-Fuqaha</surname>
          </string-name>
          ,
          <article-title>Fake news detection in social media using graph neural networks and NLP techniques: A COVID-19 use-case</article-title>
          ,
          <year>2020</year>
          . arXiv:2012.07517.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Á.</given-names>
            <surname>Huertas-García</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Martín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huertas-Tato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Camacho</surname>
          </string-name>
          ,
          <article-title>Countering malicious content moderation evasion in online social networks: Simulation and detection of word camouflage</article-title>
          ,
          <source>Applied Soft Computing</source>
          <volume>145</volume>
          (
          <year>2023</year>
          )
          <fpage>110552</fpage>
          . doi:10.1016/j.asoc.2023.110552.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Editor</given-names>
            <surname>PJE</surname>
          </string-name>
          ,
          <article-title>Nourishing critical thinking skills using neuro-linguistic programming: Farah Hashmi</article-title>
          ,
          <source>PJE</source>
          <volume>39</volume>
          (
          <year>2023</year>
          ). URL: https://ojs.aiou.edu.pk/index.php/pje/article/view/865. doi:10.30971/pje.v39i1.865.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>H.</given-names>
            <surname>Devinney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Björklund</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Björklund</surname>
          </string-name>
          ,
          <article-title>Theories of "gender" in NLP bias research</article-title>
          ,
          <year>2022</year>
          . arXiv:2205.02526.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bevendorff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X. B.</given-names>
            <surname>Casals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chulvi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dementieva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Elnagar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Freitag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fröbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Korenčić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mukherjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Panchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rangel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Smirnova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Stamatatos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Taulé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ustalov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Zangerle</surname>
          </string-name>
          ,
          <article-title>Overview of PAN 2024: Multi-Author Writing Style Analysis, Multilingual Text Detoxification, Oppositional Thinking Analysis, and Generative AI Authorship Verification</article-title>
          , in: Experimental IR Meets Multilinguality, Multimodality, and Interaction.
          <source>Proceedings of the Fourteenth International Conference of the CLEF Association (CLEF 2024)</source>
          , Lecture Notes in Computer Science, Springer, Berlin Heidelberg New York,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bevendorff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X. B.</given-names>
            <surname>Casals</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Chulvi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dementieva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Elnagar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Freitag</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fröbe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Korenčić</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mayerl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mukherjee</surname>
          </string-name>
          , et al.,
          <article-title>Overview of PAN 2024: Multi-author writing style analysis, multilingual text detoxification, oppositional thinking analysis, and generative AI authorship verification</article-title>
          , in:
          <source>European Conference on Information Retrieval</source>
          , Springer,
          <year>2024</year>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>10</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>