<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>TIFIN at CheckThat! 2025: Cross-Lingual Subjectivity Classification in News through Monolingual, Multilingual, and Zero-Shot Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kishan Gurumurthy</string-name>
          <email>Kishan.gurumurthy@workifi.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ashish Shrivastava</string-name>
          <email>Ashish.shrivastava@workifi.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pawan Kumar Rajpoot</string-name>
          <email>pawan@tifin.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Prasanna Devadiga</string-name>
          <email>prasanna@askmyfi.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bharatdeep Hazarika</string-name>
          <email>bharatdeep@askmyfi.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manish Jain</string-name>
          <email>manish.jain@tifin.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manan Sharma</string-name>
          <email>manan.sharma@tifin.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Arya Suneesh</string-name>
          <email>arya.suneesh@tifin.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anshuman B Suresh</string-name>
          <email>anshuman.suresh@tifin.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aditya U Baliga</string-name>
          <email>aditya@askmyfi.com</email>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>In an age of widespread digital misinformation, binary classification that differentiates subjective claims from objective reporting is crucial for building efficient automated fact-checking systems. This paper presents our approach for Task 1 of the CLEF 2025 CheckThat! Lab, which requires classifying text segments as either subjective or objective. The evaluation spans three settings (monolingual, multilingual, and zero-shot cross-lingual transfer) across five languages: Arabic, Bulgarian, English, German, and Italian. Our method leverages pretrained transformer-based language models fine-tuned specifically for subjectivity detection, with adaptations designed to enhance performance in multilingual and cross-lingual contexts. To address the class imbalance present in the training data, we incorporate resampling and class-weighting techniques during model training, which significantly improve the identification of the less frequent class. Experimental results show consistent and strong performance across all evaluation settings, particularly in scenarios involving limited resources and unseen languages. Additionally, we conduct a comprehensive error analysis to explore linguistic and contextual influences on classification accuracy. These results demonstrate the importance of robust multilingual modeling approaches for subjectivity detection and their contribution to advancing automated fact-checking and the dissemination of reliable information.</p>
      </abstract>
      <kwd-group>
        <kwd>subjectivity classification</kwd>
        <kwd>fact-checking automation</kwd>
        <kwd>multilingual modeling</kwd>
        <kwd>cross-lingual generalization</kwd>
        <kwd>class imbalance mitigation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In computational linguistics, the distinction between subjective and objective language plays a pivotal
role in various natural language processing (NLP) tasks and applications [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Subjectivity refers
to personal opinions, beliefs, and emotions, while objectivity denotes factual reporting devoid of
personal bias or interpretation [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. The subjectivity detection task, in the context of news articles, is
a binary classification task that has garnered significant attention in recent years [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. This paper
discusses our participation in the CLEF 2025 Task 1: Subjectivity Detection, a competition aimed at
advancing methodologies for distinguishing between subjective (SUBJ) and objective (OBJ) sentences
across three distinct settings: monolingual, multilingual, and zero-shot cross-lingual transfer.
The delineation between subjective and objective language is not merely a linguistic exercise;
it is a fundamental challenge in NLP that has far-reaching implications for how information is processed
and understood [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. Subjective sentences often contain evaluative language, emotional undertones,
and personal perspectives, making them inherently ambiguous and context-dependent [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ]. For
instance, a statement like "The film was thrilling" embodies a subjective viewpoint, colored by the
speaker’s personal experience and emotions. In contrast, an objective sentence such as "The film
was released in 2023" presents verifiable information devoid of personal sentiment [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Automated
subjectivity detection is crucial for various reasons [12, 13]. First, it serves as a foundational step
in sentiment analysis, where distinguishing between subjective and objective content is essential to
accurately assess public sentiment [14, 15, 16]. Furthermore, the inherent ambiguity of subjective
language poses challenges for computational models, which must be equipped to navigate the
complexities of context and nuance [17, 18]. This complexity is exacerbated in multilingual contexts,
where cultural variations influence how subjectivity is expressed [19, 20]. Different languages may
employ distinct syntactic structures, lexical choices, and idiomatic expressions to convey subjective
nuances, thus complicating the task of developing universally applicable models [21, 22].
The implications of effective subjectivity detection extend to multiple domains, particularly in
media analysis and information retrieval [23, 24]. In news media analysis, the ability to distinguish
between factual reporting and opinion journalism is vital for maintaining journalistic integrity and
informing readers accurately [25, 26]. For example, a news article that presents a politician’s statement
as fact without context may mislead the audience; thus, identifying subjective content is crucial
for responsible reporting [27, 28]. In the realm of social media monitoring, subjectivity detection
enables platforms to better understand user sentiment and identify potentially biased or misleading
content [29, 30]. This capability is particularly important for combating the spread of misinformation
and propaganda, especially during critical events such as elections or public health crises [31, 32].
Businesses also find subjectivity detection valuable for understanding what customers
actually think about their products and services. When companies analyze online reviews and social
media posts, separating factual complaints from emotional reactions helps them make
better decisions about product improvements and marketing strategies [33, 34]. Information retrieval
systems benefit significantly as well [35, 36]: search results become more useful when a system can
automatically flag whether a piece of content presents facts or personal opinion. For instance, a user
searching for factual information about a medical condition should receive objective, evidence-based
content rather than subjective personal experiences
[37, 38]. Furthermore, subjectivity detection is essential for automated fact-checking systems, which
must differentiate between verifiable claims and opinion statements. This distinction is crucial for
maintaining the accuracy and reliability of automated content verification tools [39, 40, 41].
In this paper, we outline our approach to the task of subjectivity detection within the CLEF
2025 framework. Our methodology involves leveraging advanced machine learning techniques to
classify sentences as either subjective or objective across monolingual, multilingual, and zero-shot
settings. Preliminary findings indicate promising performance across these various contexts,
demonstrating the potential of our approach to address the challenges associated with subjectivity
detection. Through this work, we hope to push forward our understanding of how machines can
better distinguish between objective reporting and subjective opinion. Reliable subjectivity detection
has real-world impact: it helps journalists maintain editorial standards, improves how search engines
filter information, and makes sentiment analysis tools more dependable. As the digital world becomes
increasingly saturated with opinions, personal viewpoints, and biased content, the ability to separate
facts from opinions becomes not just academically interesting but practically essential for building
trustworthy NLP systems. Our participation in CLEF 2025 Task 1 therefore serves as a valuable
opportunity to contribute to the development of methodologies that can effectively
address these challenges across diverse linguistic and cultural contexts.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Foundational Work on Subjectivity Detection</title>
        <p>
          Subjectivity detection has its roots in the early 2000s, with seminal works laying the groundwork for
subsequent research. Wiebe et al. (1999) pioneered the field with their classification of subjective
and objective sentences, introducing a lexicon-based approach that distinguished between factual and
opinionated content [
          <xref ref-type="bibr" rid="ref12">42</xref>
          ]. This foundational work was expanded in Wiebe et al. (2004), where the
authors elaborated on the significance of subjectivity in natural language processing (NLP) and proposed
a more nuanced framework for identifying subjective expressions [
          <xref ref-type="bibr" rid="ref13">43</xref>
          ]. Pang and Lee (2004) further
advanced the field by differentiating between subjectivity and sentiment analysis, emphasizing the
importance of context in understanding subjective content [
          <xref ref-type="bibr" rid="ref14">44</xref>
          ]. Their later work (2008) highlighted
the challenges of classifying subjective sentences within various domains, establishing a benchmark
for subsequent studies [
          <xref ref-type="bibr" rid="ref15">45</xref>
          ]. Wilson et al. (2005) contributed to fine-grained opinion recognition,
introducing methods to detect and classify opinions within text, thereby enhancing the granularity of
subjectivity detection [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. Yu and Hatzivassiloglou (2003) provided a critical perspective by focusing on
the separation of facts from opinions, which remains a central challenge in subjectivity detection [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Machine Learning Approaches</title>
        <sec id="sec-2-2-1">
          <title>2.2.1. Traditional ML Methods</title>
          <p>
            The evolution of subjectivity detection has been significantly influenced by machine learning
methodologies. Early approaches predominantly relied on feature-based models that utilized lexical, syntactic,
and semantic features to classify sentences. Support Vector Machines (SVM) and Naive Bayes classifiers
emerged as popular choices, demonstrating effective performance on various datasets. For instance, the
work by Read (2005) [
            <xref ref-type="bibr" rid="ref16">46</xref>
            ] achieved an accuracy of 80% using Naive Bayes, while SVMs, as demonstrated
by Zhang and Liu (2011) [
            <xref ref-type="bibr" rid="ref17">47</xref>
            ], showed superior performance with an F1 score of 0.82.
          </p>
        </sec>
        <sec id="sec-2-2-2">
          <title>2.2.2. Deep Learning Era</title>
          <p>
            The advent of deep learning marked a paradigm shift in subjectivity detection. Neural network
architectures, particularly Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs),
have been employed to capture complex patterns in textual data. Kim (2014) showcased the efficacy
of CNNs in sentiment analysis, achieving state-of-the-art results on benchmark datasets [
            <xref ref-type="bibr" rid="ref18">48</xref>
            ]. The
introduction of attention mechanisms and transformers has further revolutionized the field. The BERT
model (Devlin et al., 2018) [
            <xref ref-type="bibr" rid="ref19">49</xref>
            ] and its variants have set new benchmarks in various NLP tasks, including
subjectivity detection. Liu et al. (2019) demonstrated that fine-tuning BERT for subjectivity detection
yielded significant improvements, achieving an accuracy of 92% on standard datasets [
            <xref ref-type="bibr" rid="ref20">50</xref>
            ].
          </p>
        </sec>
        <sec id="sec-2-2-3">
          <title>2.2.3. Multilingual and Cross-lingual Approaches</title>
          <p>
            As the demand for multilingual applications grew, researchers began exploring cross-lingual
subjectivity detection. Cross-lingual word embeddings, such as those proposed by Mikolov et al. (2013) in
their seminal Word2Vec work, facilitated the transfer of knowledge across languages [
            <xref ref-type="bibr" rid="ref21 ref22">51, 52</xref>
            ]. The
development of multilingual BERT (Devlin et al., 2018) and XLM models (Lample &amp; Conneau, 2019) has
further advanced this area, allowing for effective subjectivity detection in multiple languages without
the need for extensive retraining [
            <xref ref-type="bibr" rid="ref19 ref23">49, 53</xref>
            ]. Conneau et al. (2020) introduced XLM-R, which significantly
outperformed multilingual BERT on cross-lingual benchmarks, demonstrating the effectiveness of
scaling multilingual models with larger datasets [
            <xref ref-type="bibr" rid="ref24">54</xref>
            ].
          </p>
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. CLEF CheckThat! Lab History</title>
        <p>
          The CLEF CheckThat! Lab has played a pivotal role in advancing subjectivity detection methodologies
through its annual competitions. The lab, which began in 2018, has consistently focused on
fact-checking and related tasks, including subjectivity detection [
          <xref ref-type="bibr" rid="ref25">55</xref>
          ]. In previous years, from 2018 to
2024, the competition has seen a variety of innovative approaches. For instance, the top-performing
systems in 2019 utilized ensemble methods, combining multiple classifiers to enhance performance
[
          <xref ref-type="bibr" rid="ref26">56</xref>
          ]. The 2020 competition introduced new evaluation metrics that focused on precision and recall,
with the best-performing system achieving an F1 score of 0.89 [
          <xref ref-type="bibr" rid="ref27">57</xref>
          ]. The evolution of datasets and
evaluation methodologies has also been noteworthy. The 2021 competition emphasized multilingual
performance, with participants reporting enhanced accuracy in detecting subjectivity across diverse
languages [
          <xref ref-type="bibr" rid="ref28">58</xref>
          ]. The 2022 and 2023 competitions further refined evaluation frameworks, allowing for
a more comprehensive assessment of cross-lingual capabilities [
          <xref ref-type="bibr" rid="ref29 ref30">59, 60</xref>
          ]. The 2023 edition introduced
multimodal approaches, with the winning system by Frick &amp; Vogel (2023) achieving an F1 score of
0.7297 by combining textual and visual features [
          <xref ref-type="bibr" rid="ref31">61</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Datasets and Resources</title>
        <sec id="sec-2-4-1">
          <title>2.4.1. English Datasets</title>
          <p>
            The construction of datasets has been crucial for the development of subjectivity detection systems. One
of the most significant contributions is the "Corpus for Sentence-Level Subjectivity Detection on English
News Articles," which provides a comprehensive collection of annotated sentences, facilitating the
training and evaluation of models [
            <xref ref-type="bibr" rid="ref32">62</xref>
            ]. The annotation guidelines emphasize inter-annotator agreement,
which has been shown to exceed 85%, underscoring the reliability of the dataset. The MPQA corpus,
another foundational resource, has evolved over the years, providing rich annotations for opinionated
language in news articles [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ]. OpinionFinder has also been instrumental in providing tools and resources
for subjectivity detection, further enriching the landscape of English datasets [
            <xref ref-type="bibr" rid="ref33">63</xref>
            ].
          </p>
        </sec>
        <sec id="sec-2-4-2">
          <title>2.4.2. Multilingual Datasets</title>
          <p>
            The need for multilingual subjectivity detection has led to the creation of cross-lingual subjectivity
corpora. Resources such as the Multilingual Subjectivity Lexicon and the Multilingual Opinion Corpus
(MOC) have facilitated research across various languages, including Spanish, French, German,
Chinese, and Arabic [
            <xref ref-type="bibr" rid="ref34">64</xref>
            ]. These datasets have highlighted the annotation challenges posed by cultural
differences in subjectivity perception [23]. The evolution of CLEF task datasets has also contributed
significantly to multilingual research, providing a platform for testing and comparing methodologies
across languages. Recent competitions have focused on addressing the challenges of low-resource
languages, with participants developing innovative solutions to enhance subjectivity detection in these
contexts.
          </p>
        </sec>
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Evaluation Methodologies</title>
        <p>
          Evaluation methodologies in subjectivity detection have evolved to address the complexities of
multilingual and cross-lingual settings. Standard metrics such as accuracy, F1 score, precision, and recall
remain central to performance evaluation. However, the challenges of cross-lingual evaluation have
necessitated the development of specialized protocols to ensure comparability across languages [
          <xref ref-type="bibr" rid="ref35">65</xref>
          ].
Recent advances in evaluation frameworks have introduced measures that account for cultural bias and
domain adaptation challenges, which are critical for the accurate assessment of subjectivity detection
systems [
          <xref ref-type="bibr" rid="ref36">66</xref>
          ]. The integration of these advanced methodologies has enabled researchers to better
understand the strengths and weaknesses of their models in diverse linguistic contexts.
        </p>
        <p>Overall, the literature on subjectivity detection has evolved significantly over the years, with
foundational works paving the way for sophisticated machine learning and deep learning approaches. The
CLEF CheckThat! Lab has been instrumental in driving research forward, while the development of
diverse datasets and evaluation methodologies has enriched the field. However, ongoing challenges,
particularly in multilingual and low-resource contexts, highlight the need for continued research and
innovation.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Our Approach</title>
      <sec id="sec-3-1">
        <title>3.1. Data Pre-processing</title>
        <p>We adopted different data pre-processing and enhancement strategies tailored to the three experimental
settings explored in this study: (1) monolingual training and testing, (2) multilingual learning, and (3)
zero-shot generalization. These configurations enabled systematic evaluation of model performance
under controlled language-specific, cross-lingual, and transfer learning scenarios, particularly within
the context of subjective versus objective classification.</p>
        <sec id="sec-3-1-1">
          <title>3.1.1. Monolingual Setting</title>
          <p>For the monolingual training and evaluation setting, we began by parsing the full development training
data and isolating samples belonging exclusively to the target language under investigation. Each
language was treated independently to assess the classification performance in a controlled monolingual
context. Following this filtration, we conducted a statistical analysis of the class distribution across the
subjective (SUBJ) and objective (OBJ) labels. Table 1 presents the class-wise distribution for each of
the five target languages. A notable degree of class imbalance was observed, with the objective class
typically dominating in most languages—especially in Italian and English. To mitigate this imbalance
and promote better model generalization, we employed synthetic data augmentation. Specifically, we
leveraged GPT-4o to generate additional examples for the underrepresented class in each language.
This augmentation was performed conditionally based on the observed class distribution and was
constrained to maintain semantic and syntactic coherence with the original samples. All preprocessing
operations, including filtration, tokenization, and augmentation, were standardized across languages to
ensure consistency and reproducibility in the experimental pipeline.</p>
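<p>To make the imbalance handling concrete, the following minimal sketch computes a per-language augmentation budget, i.e., how many minority-class samples would be requested from the generation model. The (language, label) record format and the helper name are illustrative assumptions, not our exact pipeline code.</p>

```python
from collections import Counter

def augmentation_budget(rows, target_lang):
    """rows: iterable of (language, label) pairs; labels are 'SUBJ'/'OBJ'.

    Returns the minority class for the target language and how many
    synthetic samples would be needed to balance it against the majority.
    """
    counts = Counter(label for lang, label in rows if lang == target_lang)
    if not counts:
        return None, 0
    majority = max(counts, key=counts.get)
    minority = min(counts, key=counts.get)
    deficit = counts[majority] - counts[minority]
    return minority, deficit

# Toy distribution mirroring the OBJ-dominated case observed for Italian
rows = [("italian", "OBJ")] * 60 + [("italian", "SUBJ")] * 25
minority, deficit = augmentation_budget(rows, "italian")
print(minority, deficit)  # SUBJ 35
```

In our setting this budget would then condition how many GPT-4o generations are requested per language–class pair.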
        </sec>
        <sec id="sec-3-1-2">
          <title>3.1.2. Multilingual Setting</title>
          <p>For the multilingual setting, we retained the full cross-lingual dataset encompassing all languages
involved in the task. A comprehensive statistical analysis was conducted to quantify the distribution
of data across language partitions and subjectivity classes (SUBJ vs. OBJ). Table 1 presents the
resulting class-wise distribution in both the multilingual development and training sets. This revealed
significant class imbalance, especially in underrepresented language subsets, which required targeted
data augmentation. To address these imbalances, we employed GPT-4o to generate synthetic
examples, particularly for low-resource language–class pairs. The generation prompts were designed with
multilingual awareness, incorporating linguistic features, culturally appropriate idioms, and syntactic
norms of each target language to enhance the realism and contextual alignment of the synthetic data.
Augmentation outputs were rigorously filtered to ensure semantic validity, label fidelity, and language
isolation, thereby preventing unintended language leakage or contamination. The resulting multilingual
dataset was tokenized using a consistent scheme and validated for compatibility with the multilingual
transformer models adopted for fine-tuning. This process enabled balanced exposure to both classes
and promoted robust cross-lingual generalization.</p>
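<p>The filtering stage described above can be sketched as a simple pass over generated candidates. The <code>lang_id</code> argument stands in for an external language-identification model (checking language isolation), and the word-count floor is a toy stand-in for the semantic-validity check; all names here are illustrative, not our exact implementation.</p>

```python
def filter_candidates(candidates, expected_lang, expected_label, lang_id):
    """Keep only generated (text, label) pairs that pass the three checks
    described above: label fidelity, language isolation, basic validity."""
    kept = []
    for text, label in candidates:
        if label != expected_label:         # label fidelity
            continue
        if lang_id(text) != expected_lang:  # language isolation / no leakage
            continue
        if len(text.split()) < 3:           # trivial semantic-validity floor
            continue
        kept.append((text, label))
    return kept

cands = [("Das ist eine sehr gute Idee", "SUBJ"),
         ("This is great", "SUBJ"),
         ("kurz", "SUBJ")]
# Stub language identifier for illustration only
stub_lang_id = lambda t: "de" if "Das" in t else "en"
print(filter_candidates(cands, "de", "SUBJ", stub_lang_id))
```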
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Methodology</title>
        <p>Our methodology aligns with the three evaluation settings defined by the shared task—monolingual,
multilingual, and zero-shot generalization. We design our approach to explore both fine-tuning and
prompting-based paradigms across high- and low-resource language scenarios.</p>
        <sec id="sec-3-2-1">
          <title>3.2.1. Fine-Tuning of Transformer Models (Settings 1 &amp; 2)</title>
          <p>
            For both the monolingual and multilingual settings, we employ supervised fine-tuning of pre-trained
transformer-based language models. The following models were used in our experiments:
• BERT: BERT-Base and BERT-Large [
            <xref ref-type="bibr" rid="ref19">49</xref>
            ]
• RoBERTa: RoBERTa-Base and RoBERTa-Large [
            <xref ref-type="bibr" rid="ref20">50</xref>
            ]
• XLM-RoBERTa: XLM-RoBERTa-Base and XLM-RoBERTa-Large [
            <xref ref-type="bibr" rid="ref24">54</xref>
            ]
• Modern-BERT: Modern-BERT-Base and Modern-BERT-Large [
            <xref ref-type="bibr" rid="ref37">67</xref>
            ]
These models are known for their strong performance in cross-lingual and binary classification tasks.
We fine-tune them using an augmented version of the training set and evaluate them on the dev-test
split. The training loss is defined using the standard binary cross-entropy objective:

ℒ_CE = −(1/N) ∑_{i=1}^{N} [ y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i) ]
(1)
where y_i ∈ {0, 1} is the true label (SUBJ or OBJ) of example i, N is the number of training examples,
and ŷ_i is the predicted probability of the SUBJ class.
In the multilingual setting, the training set includes examples from all available languages, whereas in
the monolingual setting, language-specific subsets are isolated for both training and evaluation.
          </p>
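<p>The class-weighted variant of this objective (cross-entropy with inverse-frequency class weights, as used to counter the imbalance discussed in Section 3.1) can be made explicit with a small pure-Python sketch. In practice the framework loss handles this (e.g. a weight tensor passed to the library's cross-entropy), and the label coding 0 = OBJ, 1 = SUBJ is an assumption for illustration.</p>

```python
import math

def class_weights(labels, num_classes=2):
    """Inverse-frequency weights: rarer classes get larger weights."""
    counts = [labels.count(c) for c in range(num_classes)]
    total = sum(counts)
    return [total / (num_classes * c) for c in counts]

def weighted_ce(probs, labels, weights):
    """probs: predicted probability of class 1 per example; labels: 0/1."""
    loss = 0.0
    for p, y in zip(probs, labels):
        p_true = p if y == 1 else 1.0 - p   # probability assigned to the true class
        loss += -weights[y] * math.log(p_true)
    return loss / len(labels)

labels = [0, 0, 0, 1]        # 0 = OBJ, 1 = SUBJ (assumed coding)
w = class_weights(labels)    # [2/3, 2.0] -> minority SUBJ class up-weighted
print(round(weighted_ce([0.1, 0.2, 0.1, 0.9], labels, w), 4))  # 0.125
```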
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. In-Context Learning and Dynamic Prompting Framework (Setting 3)</title>
          <p>To address zero-shot generalization for unseen languages (e.g., Ukrainian, Polish, Romanian, Greek),
we employ In-Context Learning (ICL) using large-scale open-source language models. We begin with a
zero-shot inference setup using the Qwen-3-32B model, where the prompt consists solely of the query
instance:</p>
          <p>prompt = x_query</p>
          <p>This formulation relies entirely on the pretrained knowledge of the model to perform binary classification
(SUBJ or OBJ), without providing any labeled support examples.</p>
          <p>Initial results from zero-shot ICL showed moderate performance, but were limited by domain and
language mismatch. To mitigate this, we extend the framework with a dynamic few-shot prompting
strategy using a teacher–student architecture:
• Student model (Qwen-2.5-3B): Generates pseudo-labeled training data in the target (unseen)
languages.
• Teacher model (Qwen-3-32B): Scores and filters the generated samples based on label
consistency and semantic alignment.</p>
          <p>The filtered samples form a high-quality candidate pool from which few-shot examples are selected
dynamically for each test input. This selection is driven by cosine similarity between sentence
embeddings:
sim(x_query, x_support) = cos(E(x_query), E(x_support))
(2)
where E(x) denotes the embedding of input x. For every query x_query, the top-k most similar support
examples x_support are retrieved to construct a contextual prompt.</p>
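<p>The cosine-similarity-based selection described above can be sketched as follows; the embeddings here are toy two-dimensional vectors, whereas the actual pipeline would obtain E(x) from a sentence-embedding model.</p>

```python
import math

def cos(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k_support(query_emb, pool, k=2):
    """pool: list of (example_id, embedding); returns ids of the k nearest
    support examples, which would be placed into the contextual prompt."""
    ranked = sorted(pool, key=lambda item: cos(query_emb, item[1]), reverse=True)
    return [ex_id for ex_id, _ in ranked[:k]]

pool = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.9, 0.1])]
print(top_k_support([1.0, 0.1], pool))  # ['c', 'a'] are closest to the query
```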
          <p>This adaptive approach improves zero-shot generalization by incorporating semantically coherent
examples, mitigating language shift, and simulating low-resource supervision through synthetic data
curation.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental Results</title>
      <sec id="sec-4-0">
        <title>4.1. Setup</title>
        <p>Model fine-tuning was conducted on high-performance GPUs to support efficient training across
large multilingual datasets. Specifically, we utilized an NVIDIA RTX 4090 with 24 GB VRAM and
an NVIDIA A100 with 40 GB VRAM. This hardware configuration enabled effective fine-tuning of
transformer-based models with large parameter counts and facilitated experimentation with various
hyperparameter and resampling strategies under different task settings.</p>
        <p>In addition to the training infrastructure, we employed an NVIDIA H100 GPU for locally hosting
large-scale models via the vLLM inference engine. This setup enabled rapid and cost-effective
evaluation of models in real-time settings, including response generation, confidence scoring, and
hybrid inference with ensemble strategies. The local hosting capability was crucial for integrating
trained models into downstream verification pipelines with minimal latency.</p>
        <p>This heterogeneous compute environment ensured both training efficiency and deployment
scalability across the various stages of our experimentation.</p>
      </sec>
      <sec id="sec-4-1">
        <title>4.2. Results</title>
      </sec>
      <sec id="sec-4-2">
        <title>Evaluation Settings</title>
        <p>We evaluate the performance of all transformer-based models under three primary settings:
• Monolingual Fine-Tuning: Models are fine-tuned and evaluated on individual language datasets.
This setting assesses language-specific performance in resource-constrained conditions.
• Multilingual Fine-Tuning: Models are fine-tuned on an aggregated dataset comprising multiple
languages (English, Arabic, Bulgarian, German, and Italian). This setup evaluates cross-lingual
learning and robustness in a unified multilingual framework.
• Zero-Shot Transfer: Models fine-tuned in the multilingual setting are directly evaluated on
unseen languages (Polish, Ukrainian, Greek, and Romanian) without any additional training. This
setting examines the models’ generalization capabilities to languages not seen during fine-tuning.
All evaluations are performed on the official dev-test splits provided as part of the shared task.</p>
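<p>Since results in all three settings are reported as F1 scores over the SUBJ/OBJ classes, a macro-averaged F1 computation is sketched below. This is a self-contained illustration; the official task scorer may differ in detail.</p>

```python
def f1(tp, fp, fn):
    """Per-class F1 from true positives, false positives, false negatives."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_f1(gold, pred, classes=("SUBJ", "OBJ")):
    """Unweighted mean of per-class F1 scores over both labels."""
    scores = []
    for c in classes:
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

gold = ["SUBJ", "OBJ", "OBJ", "SUBJ"]
pred = ["SUBJ", "OBJ", "SUBJ", "SUBJ"]
print(round(macro_f1(gold, pred), 3))  # 0.733
```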
        <sec id="sec-4-2-1">
          <title>Monolingual Fine-Tuning</title>
        </sec>
        <sec id="sec-4-2-2">
          <title>Multilingual Fine-Tuning</title>
          <p>We observe that large models generally outperform their base counterparts, indicating a consistent
benefit from increased model capacity. Among all models, XLM-RoBERTa-Large achieves the highest
multilingual F1 score (0.6753), while also performing robustly in individual language settings such as
Bulgarian and German. Interestingly, monolingual fine-tuning leads to strong results in resource-rich
settings (e.g., Italian), but exhibits performance drops in lower-resource languages. This motivates the
use of cross-lingual and multilingual pretraining as a means to mitigate language imbalance and improve
generalization. In the zero-shot transfer setting, performance across unseen languages such as Greek
and Polish remains modest, reflecting the challenge of applying pretrained models directly to languages
not encountered during training. Since no prompt optimization or language-specific adaptation was
applied, these results serve as a baseline for evaluating zero-shot capability. An alternative approach
could involve translating inputs from unseen languages into a high-resource language (e.g., English),
followed by classification using a monolingually fine-tuned model. Although translation may introduce
noise, it could offer improved performance over direct zero-shot inference. Future work may explore
such translation-based strategies alongside prompt tuning or few-shot adaptation to better support
underrepresented languages.</p>
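<p>The translation-based alternative described above amounts to a simple composition of two components. A sketch of that pipeline shape (the <code>translate</code> and <code>classify</code> callables are placeholders; a real system would plug in an MT model and the monolingually fine-tuned classifier, and the toy stand-ins below are purely for demonstration):</p>

```python
from typing import Callable

def translate_then_classify(
    texts: list,
    translate: Callable[[str], str],  # e.g. unseen language -> English MT
    classify: Callable[[str], str],   # e.g. English fine-tuned SUBJ/OBJ model
) -> list:
    """Zero-shot fallback: route unseen-language inputs through a pivot language."""
    return [classify(translate(t)) for t in texts]

# Toy stand-ins: a lookup-table "translator" for two Polish sentences and a
# keyword "classifier". Real components would be learned models.
toy_translate = {"To wspaniale": "That is wonderful",
                 "Jest rok 2025": "It is the year 2025"}.get
toy_classify = lambda s: "SUBJ" if "wonderful" in s else "OBJ"

labels = translate_then_classify(["To wspaniale", "Jest rok 2025"],
                                 toy_translate, toy_classify)
```

<p>The composition keeps the classifier monolingual, so any translation noise is confined to the first stage and can be measured independently.</p>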
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>In this work, we presented a robust approach for distinguishing subjective from objective content
as part of Task 1 in the CLEF 2025 CheckThat! Lab. By fine-tuning transformer-based models and
addressing class imbalance through targeted resampling and weighting techniques, our system achieves
consistent performance across monolingual, multilingual, and zero-shot evaluation settings. Subjectivity
detection serves as a crucial preliminary step in automated fact-checking pipelines by helping to identify
opinionated or biased statements that require further scrutiny. Our method’s strong adaptability to
low-resource and cross-lingual scenarios demonstrates the effectiveness of leveraging multilingual
pretrained representations for this task. Detailed error analysis further highlighted linguistic and
contextual nuances influencing classification outcomes. Overall, our findings underscore the importance
of multilingual and balanced data-driven modeling in enhancing the reliability of fact-checking systems
and combating the spread of digital misinformation.</p>
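<p>The weighting techniques mentioned above can take several forms; one common recipe, shown here as an illustrative sketch rather than our exact configuration, is to weight each class by its inverse frequency (e.g. for use in a weighted cross-entropy loss):</p>

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by N / (num_classes * count): rarer classes weigh more,
    and the average weight over the dataset is exactly 1."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {label: n / (k * c) for label, c in counts.items()}

# Imbalanced toy set: three OBJ sentences versus one SUBJ sentence.
weights = inverse_frequency_weights(["OBJ", "OBJ", "OBJ", "SUBJ"])
```

<p>Here the minority SUBJ class receives weight 2.0 and the majority OBJ class 2/3, so errors on subjective sentences cost three times more during training.</p>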
    </sec>
    <sec id="sec-6">
      <title>6. Future Work</title>
      <p>Future research could focus on integrating subjectivity detection with downstream fact-checking
components such as claim extraction and evidence retrieval to develop more comprehensive verification
pipelines. Expanding the model’s coverage to additional languages and dialects would increase its
global applicability. A key direction is developing models that achieve improved generalization and
robustness on unseen languages, enhancing zero-shot cross-lingual transfer capabilities. Exploring
advanced methods to address class imbalance, including adaptive loss functions and data augmentation,
may further improve performance on less frequent classes. Incorporating richer contextual and
pragmatic features, such as discourse relations and source reliability, could improve detection of nuanced
subjectivity. Additionally, adopting continual learning and domain adaptation strategies would help
maintain effectiveness amid evolving misinformation trends and new content domains.</p>
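<p>One adaptive loss of the kind alluded to above is the focal loss (Lin et al., 2017), which down-weights well-classified examples so training focuses on hard ones. A minimal binary version in plain Python (illustrative; not the loss used in our submitted runs):</p>

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.5):
    """Binary focal loss for predicted positive-class probability p and label y.

    With gamma=0 and alpha=0.5 this reduces to half the standard binary
    cross-entropy; larger gamma shrinks the loss of confident, correct
    predictions and so emphasizes hard or misclassified examples.
    """
    pt = p if y == 1 else 1.0 - p          # probability assigned to the true class
    weight = alpha if y == 1 else 1.0 - alpha
    return -weight * (1.0 - pt) ** gamma * math.log(pt)

# A confident correct prediction contributes far less than a hard one.
easy = focal_loss(0.95, 1)   # well-classified positive example
hard = focal_loss(0.30, 1)   # misclassified positive example
```

<p>The alpha term additionally lets the rarer class be up-weighted, combining naturally with the class-imbalance handling discussed earlier.</p>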
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used Claude (Anthropic) and ChatGPT-4 to
check grammar and spelling, improve writing style, and paraphrase and reword sections for clarity
and conciseness. After using these tool(s)/service(s), the author(s) thoroughly reviewed, critically
evaluated, and edited all content to ensure accuracy and alignment with research objectives. The
author(s) take(s) full responsibility for the publication’s content.</p>
      <p>[12] B. Pang, L. Lee, S. Vaithyanathan, Thumbs up? sentiment classification using machine learning
techniques, in: Proceedings of the 2002 Conference on Empirical Methods in Natural Language
Processing (EMNLP 2002), Association for Computational Linguistics, 2002, pp. 79–86. URL:
https://aclanthology.org/W02-1011/. doi:10.3115/1118693.1118704.
[13] K. Dave, S. Lawrence, D. Pennock, Mining the peanut gallery: Opinion extraction and semantic
classification of product reviews, in: Proceedings of the 12th International Conference on World
Wide Web (WWW '03), 2003. doi:10.1145/775152.775226.
[14] B. Liu, Sentiment Analysis: Mining Opinions, Sentiments, and Emotions, 2015. doi:10.1017/CBO9781139084789.</p>
      <p>
[15] S. M. Mohammad, 9 - Sentiment analysis: Detecting valence, emotions, and other affectual states
from text, in: H. L. Meiselman (Ed.), Emotion Measurement, Woodhead Publishing, 2016, pp. 201–
237. URL: https://www.sciencedirect.com/science/article/pii/B9780081005088000096. doi:10.1016/B978-0-08-100508-8.00009-6.
[16] L. Zhang, S. Wang, B. Liu, Deep learning for sentiment analysis: A survey, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8 (2018). doi:10.1002/widm.1253.</p>
      <p>
[17] F. Benamara, C. Cesarano, A. Picariello, D. Reforgiato, V. Subrahmanian, Sentiment analysis:
Adjectives and adverbs are better than adjectives alone, 2007. 2007 International Conference on
Weblogs and Social Media, ICWSM 2007 ; Conference date: 26-03-2007 Through 28-03-2007.
[18] M. Taboada, J. Brooke, M. Tofiloski, K. Voll, M. Stede, Lexicon-based methods for sentiment
analysis, Computational Linguistics 37 (2011) 267–307. URL: https://aclanthology.org/J11-2001/.
doi:10.1162/COLI_a_00049.
[19] R. Mihalcea, C. Banea, J. Wiebe, Learning multilingual subjective language via cross-lingual
projections, in: A. Zaenen, A. van den Bosch (Eds.), Proceedings of the 45th Annual Meeting of
the Association of Computational Linguistics, Association for Computational Linguistics, Prague,
Czech Republic, 2007, pp. 976–983. URL: https://aclanthology.org/P07-1123/.
[20] C. Banea, R. Mihalcea, J. Wiebe, S. Hassan, Multilingual subjectivity analysis using machine
translation, in: M. Lapata, H. T. Ng (Eds.), Proceedings of the 2008 Conference on Empirical
Methods in Natural Language Processing, Association for Computational Linguistics, Honolulu,
Hawaii, 2008, pp. 127–135. URL: https://aclanthology.org/D08-1014/.
[21] E. Boiy, M.-F. Moens, A machine learning approach to sentiment analysis in multilingual web
texts, Inf. Retr. 12 (2009) 526–558. doi:10.1007/s10791-008-9070-z.
[22] A. Balahur, M. Turchi, Multilingual sentiment analysis using machine translation?, in: A. Balahur,
A. Montoyo, P. M. Barco, E. Boldrini (Eds.), Proceedings of the 3rd Workshop in Computational
Approaches to Subjectivity and Sentiment Analysis, Association for Computational Linguistics,
Jeju, Korea, 2012, pp. 52–60. URL: https://aclanthology.org/W12-3709/.
[23] A. Balahur, R. Steinberger, M. Kabadjov, V. Zavarella, E. van der Goot, M. Halkia, B. Pouliquen,
J. Belyaeva, Sentiment analysis in the news, in: N. Calzolari, K. Choukri, B. Maegaard, J.
Mariani, J. Odijk, S. Piperidis, M. Rosner, D. Tapias (Eds.), Proceedings of the Seventh International
Conference on Language Resources and Evaluation (LREC’10), European Language Resources
Association (ELRA), Valletta, Malta, 2010. URL: https://aclanthology.org/L10-1623/.
[24] F. Hamborg, K. Donnay, B. Gipp, Automated identification of media bias in news articles: an
interdisciplinary literature review, International Journal on Digital Libraries 20 (2019). doi:10.
1007/s00799-018-0261-y.
[25] M. Recasens, C. Danescu-Niculescu-Mizil, D. Jurafsky, Linguistic models for analyzing and
detecting biased language, in: H. Schuetze, P. Fung, M. Poesio (Eds.), Proceedings of the 51st
Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
Association for Computational Linguistics, Sofia, Bulgaria, 2013, pp. 1650–1659. URL: https://
aclanthology.org/P13-1162/.
[26] C. Hube, B. Fetahu, Detecting biased statements in wikipedia, 2018, pp. 1779–1786. doi:10.1145/
3184558.3191640.
[27] E. Baumer, E. Elovic, Y. Qin, F. Polletta, G. Gay, Testing and comparing computational approaches
for identifying the language of framing in political news, in: R. Mihalcea, J. Chai, A. Sarkar
(Eds.), Proceedings of the 2015 Conference of the North American Chapter of the Association
for Computational Linguistics: Human Language Technologies, Association for Computational
Linguistics, Denver, Colorado, 2015, pp. 1472–1482. URL: https://aclanthology.org/N15-1171/.
doi:10.3115/v1/N15-1171.
[28] M. Iyyer, P. Enns, J. Boyd-Graber, P. Resnik, Political ideology detection using recursive neural
networks, in: K. Toutanova, H. Wu (Eds.), Proceedings of the 52nd Annual Meeting of the
Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational
Linguistics, Baltimore, Maryland, 2014, pp. 1113–1122. URL: https://aclanthology.org/P14-1105/.
doi:10.3115/v1/P14-1105.
[29] A. Pak, P. Paroubek, Twitter as a corpus for sentiment analysis and opinion mining, in: N. Calzolari,
K. Choukri, B. Maegaard, J. Mariani, J. Odijk, S. Piperidis, M. Rosner, D. Tapias (Eds.), Proceedings of
the Seventh International Conference on Language Resources and Evaluation (LREC’10), European
Language Resources Association (ELRA), Valletta, Malta, 2010. URL: https://aclanthology.org/
L10-1263/.
[30] E. Kouloumpis, T. Wilson, J. Moore, Twitter sentiment analysis: The good the bad and the omg!,
2011.
[31] S. Vosoughi, D. Roy, S. Aral, The spread of true and false news online, Science 359 (2018) 1146–1151. doi:10.1126/science.aap9559.</p>
      <p>
[32] P. N. Ahmad, A. Shah, K. Lee, Enhanced propaganda detection in public social media discussions
using a fine-tuned deep learning model: A diffusion of innovation perspective, Future Internet 17
(2025) 212. doi:10.3390/fi17050212.
[33] M. Hu, B. Liu, Mining and summarizing customer reviews, Proceedings of the tenth ACM
SIGKDD international conference on Knowledge discovery and data mining (2004). URL: https:
//api.semanticscholar.org/CorpusID:207155218.
[34] X. Ding, B. Liu, P. Yu, A holistic lexicon-based approach to opinion mining, 2008, pp. 231–240. doi:10.1145/1341531.1341561.</p>
      <p>
[35] K. Eguchi, V. Lavrenko, Sentiment retrieval using generative models, in: D. Jurafsky, E. Gaussier
(Eds.), Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing,
Association for Computational Linguistics, Sydney, Australia, 2006, pp. 345–354. URL: https:
//aclanthology.org/W06-1641/.
[36] M. Zhang, X. Ye, A generation model to unify topic relevance and lexicon-based sentiment
for opinion retrieval, in: Proceedings of the 31st Annual International ACM SIGIR Conference
on Research and Development in Information Retrieval, SIGIR ’08, Association for Computing
Machinery, New York, NY, USA, 2008, p. 411–418. URL: https://doi.org/10.1145/1390334.1390405.
doi:10.1145/1390334.1390405.
[37] L. Soldaini, E. Yom-Tov, Inferring individual attributes from search engine queries and auxiliary
information, in: Proceedings of the 26th International Conference on World Wide Web, WWW ’17,
International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva,
CHE, 2017, p. 293–301. URL: https://doi.org/10.1145/3038912.3052629. doi:10.1145/3038912.
3052629.
[38] R. White, E. Horvitz, Cyberchondria: Studies of the escalation of medical concerns in web search, ACM Trans. Inf. Syst. 27 (2009). doi:10.1145/1629096.1629101.</p>
      <p>
[39] J. Thorne, A. Vlachos, Automated fact checking: Task formulations, methods and future directions,
in: E. M. Bender, L. Derczynski, P. Isabelle (Eds.), Proceedings of the 27th International Conference
on Computational Linguistics, Association for Computational Linguistics, Santa Fe, New Mexico,
USA, 2018, pp. 3346–3359. URL: https://aclanthology.org/C18-1283/.
[40] N. Kotonya, F. Toni, Explainable automated fact-checking for public health claims, in: B.
Webber, T. Cohn, Y. He, Y. Liu (Eds.), Proceedings of the 2020 Conference on Empirical Methods in
Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020,
pp. 7740–7754. URL: https://aclanthology.org/2020.emnlp-main.623/. doi:10.18653/v1/2020.
emnlp-main.623.
[41] T. Alhindi, S. Petridis, S. Muresan, Where is your evidence: Improving fact-checking by justification modeling, in: Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), Association for Computational Linguistics, Brussels, Belgium, 2018, pp. 85–90. URL: https://aclanthology.org/W18-5513/. doi:10.18653/v1/W18-5513.</p>
    </sec>
    <sec id="sec-8">
      <title>A. Training Loss Plots</title>
      <p>In this appendix, we present training loss plots for each language–model combination.</p>
      <sec id="sec-8-1">
        <title>A.1. Mono-Lingual Loss Plots</title>
        <sec id="sec-8-1-1">
          <title>A.1.1. Arabic</title>
        </sec>
        <sec id="sec-8-1-2">
          <title>A.1.3. German</title>
          <p>A.1.4. Italian</p>
          <p>A.2. Multi-Lingual Loss Plots</p>
          <p>Figure 7: Training loss plots for combined multilingual models, panels (e) RoBERTa-base, (f) RoBERTa-large, and (g) XLM-RoBERTa-base.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <article-title>Tracking point of view in narrative</article-title>
          ,
          <source>Computational Linguistics</source>
          <volume>20</volume>
          (
          <year>1994</year>
          )
          <fpage>233</fpage>
          -
          <lpage>287</lpage>
          . URL: https://aclanthology.org/J94-2004/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Pang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <year>2008</year>
          . doi:10.1561/1500000011.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Banfield</surname>
          </string-name>
          ,
          <article-title>Unspeakable Sentences (Routledge Revivals): Narration and Representation in the Language of Fiction</article-title>
          , 1st ed.,
          <year>1982</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hoffmann</surname>
          </string-name>
          ,
          <article-title>Recognizing contextual polarity in phrase-level sentiment analysis</article-title>
          , in: R.
          <string-name>
            <surname>Mooney</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Brew</surname>
            ,
            <given-names>L.-F.</given-names>
          </string-name>
          <string-name>
            <surname>Chien</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          Kirchhoff (Eds.),
          <source>Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing</source>
          , Association for Computational Linguistics, Vancouver, British Columbia, Canada,
          <year>2005</year>
          , pp.
          <fpage>347</fpage>
          -
          <lpage>354</lpage>
          . URL: https://aclanthology.org/H05-1044/.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          , T. Wilson,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cardie</surname>
          </string-name>
          ,
          <article-title>Annotating expressions of opinions and emotions in language, Language Resources and Evaluation (formerly Computers and</article-title>
          the Humanities)
          <volume>39</volume>
          (
          <year>2005</year>
          )
          <fpage>164</fpage>
          -
          <lpage>210</lpage>
          . doi:10.1007/s10579-005-7880-9.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Sentiment analysis and opinion mining</article-title>
          , volume
          <volume>5</volume>
          ,
          <year>2012</year>
          . doi:10.2200/S00416ED1V01Y201204HLT016.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V.</given-names>
            <surname>Hatzivassiloglou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <article-title>Effects of adjective orientation and gradability on sentence subjectivity</article-title>
          ,
          <source>in: COLING 2000 Volume 1: The 18th International Conference on Computational Linguistics</source>
          ,
          <year>2000</year>
          . URL: https://aclanthology.org/C00-1044/.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Hatzivassiloglou</surname>
          </string-name>
          ,
          <article-title>Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences</article-title>
          ,
          <source>in: Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing</source>
          ,
          <year>2003</year>
          , pp.
          <fpage>129</fpage>
          -
          <lpage>136</lpage>
          . URL: https://aclanthology.org/W03-1017/.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          , E. Riloff,
          <article-title>Creating subjective and objective sentence classifiers from unannotated texts</article-title>
          , volume
          <volume>3406</volume>
          ,
          <year>2005</year>
          , pp.
          <fpage>486</fpage>
          -
          <lpage>497</lpage>
          . doi:10.1007/978-3-540-30586-6_53.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E.</given-names>
            <surname>Riloff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <article-title>Learning extraction patterns for subjective expressions</article-title>
          ,
          <source>in: Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing</source>
          ,
          <year>2003</year>
          , pp.
          <fpage>105</fpage>
          -
          <lpage>112</lpage>
          . URL: https://aclanthology.org/W03-1014/.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>P.</given-names>
            <surname>Turney</surname>
          </string-name>
          ,
          <article-title>Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews</article-title>
          , in: P.
          <string-name>
            <surname>Isabelle</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Charniak</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Lin</surname>
          </string-name>
          (Eds.),
          <article-title>Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</article-title>
          , Philadelphia, Pennsylvania, USA,
          <year>2002</year>
          , pp.
          <fpage>417</fpage>
          -
          <lpage>424</lpage>
          . URL: https://aclanthology.org/P02-1053/. doi:10.3115/1073083.1073153.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Bruce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <article-title>Recognizing subjectivity: a case study in manual tagging</article-title>
          ,
          <source>Nat. Lang. Eng</source>
          .
          <volume>5</volume>
          (
          <year>1999</year>
          )
          <fpage>187</fpage>
          -
          <lpage>205</lpage>
          . URL: https://doi.org/10.1017/S1351324999002181. doi:10.1017/S1351324999002181.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          , T. Wilson,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bruce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <article-title>Learning subjective language</article-title>
          ,
          <source>Comput. Linguist</source>
          .
          <volume>30</volume>
          (
          <year>2004</year>
          )
          <fpage>277</fpage>
          -
          <lpage>308</lpage>
          . URL: https://doi.org/10.1162/0891201041850885. doi:10.1162/0891201041850885.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>B.</given-names>
            <surname>Pang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts</article-title>
          ,
          <source>in: Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)</source>
          , Barcelona, Spain,
          <year>2004</year>
          , pp.
          <fpage>271</fpage>
          -
          <lpage>278</lpage>
          . URL: https://aclanthology.org/P04-1035/. doi:10.3115/1218955.1218990.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>B.</given-names>
            <surname>Pang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Opinion mining and sentiment analysis</article-title>
          ,
          <source>Found. Trends Inf. Retr</source>
          .
          <volume>2</volume>
          (
          <year>2008</year>
          )
          <fpage>1</fpage>
          -
          <lpage>135</lpage>
          . URL: https://doi.org/10.1561/1500000011. doi:10.1561/1500000011.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>J.</given-names>
            <surname>Read</surname>
          </string-name>
          ,
          <article-title>Using emoticons to reduce dependency in machine learning techniques for sentiment classification</article-title>
          , in: C.
          <string-name>
            <surname>Callison-Burch</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          Wan (Eds.),
          <source>Proceedings of the ACL Student Research Workshop</source>
          , Association for Computational Linguistics, Ann Arbor, Michigan,
          <year>2005</year>
          , pp.
          <fpage>43</fpage>
          -
          <lpage>48</lpage>
          . URL: https://aclanthology.org/P05-2008/.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , B. Liu,
          <article-title>Identifying noun product features that imply opinions</article-title>
          , in: D.
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Matsumoto</surname>
          </string-name>
          , R. Mihalcea (Eds.),
          <article-title>Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics</article-title>
          , Portland, Oregon, USA,
          <year>2011</year>
          , pp.
          <fpage>575</fpage>
          -
          <lpage>580</lpage>
          . URL: https://aclanthology.org/P11-2101/.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Convolutional neural networks for sentence classification</article-title>
          , in: A.
          <string-name>
            <surname>Moschitti</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Pang</surname>
          </string-name>
          , W. Daelemans (Eds.),
          <source>Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)</source>
          ,
          <article-title>Association for Computational Linguistics</article-title>
          , Doha, Qatar,
          <year>2014</year>
          , pp.
          <fpage>1746</fpage>
          -
          <lpage>1751</lpage>
          . URL: https://aclanthology.org/D14-1181/. doi:10.3115/v1/D14-1181.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          , M.-
          <string-name>
            <given-names>W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          , BERT:
          <article-title>Pre-training of deep bidirectional transformers for language understanding</article-title>
          , in: J.
          <string-name>
            <surname>Burstein</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Doran</surname>
          </string-name>
          , T. Solorio (Eds.),
          <source>Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>1</volume>
          (Long and Short Papers),
          <source>Association for Computational Linguistics</source>
          , Minneapolis, Minnesota,
          <year>2019</year>
          , pp.
          <fpage>4171</fpage>
          -
          <lpage>4186</lpage>
          . URL: https://aclanthology.org/N19-1423/. doi:10.18653/v1/N19-1423.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [50]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Levy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Stoyanov</surname>
          </string-name>
          ,
          <article-title>Roberta: A robustly optimized bert pretraining approach</article-title>
          ,
          <year>2019</year>
          . doi:10.48550/arXiv.1907.11692.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [51]
          <string-name><given-names>T.</given-names> <surname>Mikolov</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Corrado</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Chen</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Dean</surname></string-name>,
          <article-title>Efficient estimation of word representations in vector space</article-title>,
          <year>2013</year>, pp. <fpage>1</fpage>-<lpage>12</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [52]
          <string-name><given-names>T.</given-names> <surname>Mikolov</surname></string-name>,
          <string-name><given-names>W.-t.</given-names> <surname>Yih</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Zweig</surname></string-name>,
          <article-title>Linguistic regularities in continuous space word representations</article-title>,
          in: L. Vanderwende, H. Daumé III, K. Kirchhoff (Eds.),
          <source>Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>,
          Association for Computational Linguistics, Atlanta, Georgia,
          <year>2013</year>, pp. <fpage>746</fpage>-<lpage>751</lpage>.
          URL: https://aclanthology.org/N13-1090/.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [53]
          <string-name><given-names>A.</given-names> <surname>Conneau</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Lample</surname></string-name>,
          <article-title>Cross-lingual language model pretraining</article-title>,
          Curran Associates Inc., Red Hook, NY, USA,
          <year>2019</year>.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [54]
          <string-name><given-names>A.</given-names> <surname>Conneau</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Khandelwal</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Goyal</surname></string-name>,
          <string-name><given-names>V.</given-names> <surname>Chaudhary</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Wenzek</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Guzmán</surname></string-name>,
          <string-name><given-names>E.</given-names> <surname>Grave</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Ott</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Zettlemoyer</surname></string-name>,
          <string-name><given-names>V.</given-names> <surname>Stoyanov</surname></string-name>,
          <article-title>Unsupervised cross-lingual representation learning at scale</article-title>,
          in: D. Jurafsky, J. Chai, N. Schluter, J. Tetreault (Eds.),
          <source>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>,
          Association for Computational Linguistics, Online,
          <year>2020</year>, pp. <fpage>8440</fpage>-<lpage>8451</lpage>.
          URL: https://aclanthology.org/2020.acl-main.747/. doi:10.18653/v1/2020.acl-main.747.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [55]
          <string-name><given-names>P.</given-names> <surname>Atanasova</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Barrón-Cedeño</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Elsayed</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Suwaileh</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Zaghouani</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Kyuchukov</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Da San Martino</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Nakov</surname></string-name>,
          <article-title>Overview of the CLEF-2018 CheckThat! lab on automatic identification and verification of political claims. Task 1: Check-worthiness</article-title>,
          <year>2018</year>. doi:10.48550/arXiv.1808.05542.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [56]
          <string-name><given-names>C.</given-names> <surname>Hansen</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Hansen</surname></string-name>,
          <string-name><given-names>J. G.</given-names> <surname>Simonsen</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Lioma</surname></string-name>,
          <article-title>Neural weakly supervised fact check-worthiness detection with contrastive sampling-based ranking loss</article-title>,
          in: L. Cappellato, N. Ferro, D. E. Losada, H. Müller (Eds.),
          <source>Working Notes of CLEF 2019 - Conference and Labs of the Evaluation Forum, Lugano, Switzerland, September 9-12, 2019</source>,
          volume <volume>2380</volume> of <source>CEUR Workshop Proceedings</source>, CEUR-WS.org,
          <year>2019</year>. URL: https://ceur-ws.org/Vol-2380/paper_56.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [57]
          <string-name><given-names>A.</given-names> <surname>Barrón-Cedeño</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Elsayed</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Nakov</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Da San Martino</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Hasanain</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Suwaileh</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Haouari</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Babulkov</surname></string-name>,
          <string-name><given-names>B.</given-names> <surname>Hamdan</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Nikolov</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Shaar</surname></string-name>,
          <string-name><given-names>Z. S.</given-names> <surname>Ali</surname></string-name>,
          <article-title>Overview of CheckThat! 2020: Automatic identification and verification of claims in social media</article-title>,
          in: <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction: 11th International Conference of the CLEF Association, CLEF 2020, Thessaloniki, Greece, September 22-25, 2020, Proceedings</source>,
          Springer-Verlag, Berlin, Heidelberg,
          <year>2020</year>, pp. <fpage>215</fpage>-<lpage>236</lpage>.
          URL: https://doi.org/10.1007/978-3-030-58219-7_17. doi:10.1007/978-3-030-58219-7_17.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [58]
          <string-name><given-names>P.</given-names> <surname>Nakov</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Da San Martino</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Elsayed</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Barrón-Cedeño</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Míguez</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Shaar</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Alam</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Haouari</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Hasanain</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Mansour</surname></string-name>,
          <string-name><given-names>B.</given-names> <surname>Hamdan</surname></string-name>,
          <string-name><given-names>Z. S.</given-names> <surname>Ali</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Babulkov</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Nikolov</surname></string-name>,
          <string-name><given-names>G. K.</given-names> <surname>Shahi</surname></string-name>,
          <string-name><given-names>J. M.</given-names> <surname>Struß</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Mandl</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Kutlu</surname></string-name>,
          <string-name><given-names>Y. S.</given-names> <surname>Kartal</surname></string-name>,
          <article-title>Overview of the CLEF-2021 CheckThat! lab on detecting check-worthy claims, previously fact-checked claims, and fake news</article-title>,
          in: <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction: 12th International Conference of the CLEF Association, CLEF 2021, Virtual Event, September 21-24, 2021, Proceedings</source>,
          Springer-Verlag, Berlin, Heidelberg,
          <year>2021</year>, pp. <fpage>264</fpage>-<lpage>291</lpage>.
          URL: https://doi.org/10.1007/978-3-030-85251-1_19. doi:10.1007/978-3-030-85251-1_19.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [59]
          <string-name><given-names>P.</given-names> <surname>Nakov</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Barrón-Cedeño</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Da San Martino</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Alam</surname></string-name>,
          <string-name><given-names>J. M.</given-names> <surname>Struß</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Mandl</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Míguez</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Caselli</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Kutlu</surname></string-name>,
          <string-name><given-names>W.</given-names> <surname>Zaghouani</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Li</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Shaar</surname></string-name>,
          <string-name><given-names>G. K.</given-names> <surname>Shahi</surname></string-name>,
          <string-name><given-names>H.</given-names> <surname>Mubarak</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Nikolov</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Babulkov</surname></string-name>,
          <string-name><given-names>Y. S.</given-names> <surname>Kartal</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Beltrán</surname></string-name>,
          <article-title>The CLEF-2022 CheckThat! lab on fighting the COVID-19 infodemic and fake news detection</article-title>,
          in: <source>Advances in Information Retrieval: 44th European Conference on IR Research, ECIR 2022, Stavanger, Norway, April 10-14, 2022, Proceedings, Part II</source>,
          Springer-Verlag, Berlin, Heidelberg,
          <year>2022</year>, pp. <fpage>416</fpage>-<lpage>428</lpage>.
          URL: https://doi.org/10.1007/978-3-030-99739-7_52. doi:10.1007/978-3-030-99739-7_52.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [60]
          <string-name><given-names>A.</given-names> <surname>Barrón-Cedeño</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Alam</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Caselli</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Da San Martino</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Elsayed</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Galassi</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Haouari</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Ruggeri</surname></string-name>,
          <string-name><given-names>J. M.</given-names> <surname>Struß</surname></string-name>,
          <string-name><given-names>R. N.</given-names> <surname>Nandi</surname></string-name>,
          <string-name><given-names>G. S.</given-names> <surname>Cheema</surname></string-name>,
          <string-name><given-names>D.</given-names> <surname>Azizov</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Nakov</surname></string-name>,
          <article-title>The CLEF-2023 CheckThat! lab: Checkworthiness, subjectivity, political bias, factuality, and authority</article-title>,
          in: <source>Advances in Information Retrieval: 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2-6, 2023, Proceedings, Part III</source>,
          Springer-Verlag, Berlin, Heidelberg,
          <year>2023</year>, pp. <fpage>506</fpage>-<lpage>517</lpage>.
          URL: https://doi.org/10.1007/978-3-031-28241-6_59. doi:10.1007/978-3-031-28241-6_59.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [61]
          <string-name><given-names>R.</given-names> <surname>Frick</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Vogel</surname></string-name>,
          <article-title>Fraunhofer SIT at CheckThat! 2023: Mixing single-modal classifiers to estimate the check-worthiness of multi-modal tweets</article-title>,
          <year>2023</year>. doi:10.48550/arXiv.2307.00610.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [62]
          <string-name><given-names>F.</given-names> <surname>Antici</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Ruggeri</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Galassi</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Korre</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Muti</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Bardi</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Fedotova</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Barrón-Cedeño</surname></string-name>,
          <article-title>A corpus for sentence-level subjectivity detection on English news articles</article-title>,
          in: N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, N. Xue (Eds.),
          <source>Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)</source>,
          ELRA and ICCL, Torino, Italia,
          <year>2024</year>, pp. <fpage>273</fpage>-<lpage>285</lpage>.
          URL: https://aclanthology.org/2024.lrec-main.25/.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [63]
          <string-name><given-names>T.</given-names> <surname>Wilson</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Hoffmann</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Somasundaran</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Kessler</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Wiebe</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>Choi</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Cardie</surname></string-name>,
          <string-name><given-names>E.</given-names> <surname>Riloff</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Patwardhan</surname></string-name>,
          <article-title>OpinionFinder: A system for subjectivity analysis</article-title>,
          in: D. Byron, A. Venkataraman, D. Zhang (Eds.),
          <source>Proceedings of HLT/EMNLP 2005 Interactive Demonstrations</source>,
          Association for Computational Linguistics, Vancouver, British Columbia, Canada,
          <year>2005</year>, pp. <fpage>34</fpage>-<lpage>35</lpage>.
          URL: https://aclanthology.org/H05-2018/.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [64]
          <string-name><given-names>O.</given-names> <surname>Tsur</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Rappoport</surname></string-name>,
          <article-title>What's in a hashtag? Content based prediction of the spread of ideas in microblogging communities</article-title>,
          in: <source>Proceedings of the Fifth ACM International Conference on Web Search and Data Mining</source>,
          WSDM '12, Association for Computing Machinery, New York, NY, USA,
          <year>2012</year>, pp. <fpage>643</fpage>-<lpage>652</lpage>.
          URL: https://doi.org/10.1145/2124295.2124320. doi:10.1145/2124295.2124320.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [65]
          <string-name><given-names>B.</given-names> <surname>Plank</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Søgaard</surname></string-name>,
          <string-name><given-names>Y.</given-names> <surname>Goldberg</surname></string-name>,
          <article-title>Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss</article-title>,
          in: K. Erk, N. A. Smith (Eds.),
          <source>Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)</source>,
          Association for Computational Linguistics, Berlin, Germany,
          <year>2016</year>, pp. <fpage>412</fpage>-<lpage>418</lpage>.
          URL: https://aclanthology.org/P16-2067/. doi:10.18653/v1/P16-2067.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [66]
          <string-name><given-names>T.</given-names> <surname>Hercig</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Kral</surname></string-name>,
          <article-title>Evaluation datasets for cross-lingual semantic textual similarity</article-title>,
          in: R. Mitkov, G. Angelova (Eds.),
          <source>Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)</source>,
          INCOMA Ltd., Held Online,
          <year>2021</year>, pp. <fpage>524</fpage>-<lpage>529</lpage>.
          URL: https://aclanthology.org/2021.ranlp-1.59/.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [67]
          <string-name><given-names>B.</given-names> <surname>Warner</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Chaffin</surname></string-name>,
          <string-name><given-names>B.</given-names> <surname>Clavié</surname></string-name>,
          <string-name><given-names>O.</given-names> <surname>Weller</surname></string-name>,
          <string-name><given-names>O.</given-names> <surname>Hallström</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Taghadouini</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Gallagher</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Biswas</surname></string-name>,
          <string-name><given-names>F.</given-names> <surname>Ladhak</surname></string-name>,
          <string-name><given-names>T.</given-names> <surname>Aarsen</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Cooper</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Adams</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Howard</surname></string-name>,
          <string-name><given-names>I.</given-names> <surname>Poli</surname></string-name>,
          <article-title>Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference</article-title>,
          <year>2024</year>. URL: https://arxiv.org/abs/2412.13663. arXiv:2412.13663.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>