<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>CSECU-Learners at CheckThat! 2025: Multilingual Transformer-based Approach for Subjectivity Detection in News Articles Across Multilingual and Zero-shot Settings</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Monir Ahmad</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abu Nowshed Chy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science and Engineering, University of Chittagong</institution>
          ,
          <addr-line>Chattogram-4331</addr-line>
          ,
          <country country="BD">Bangladesh</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>Subjectivity detection (SD) refers to identifying whether a sentence in a news article conveys the author's personal opinion or presents an impartial, factual statement. SD plays a significant role in fact-checking, sentiment analysis, and information extraction. However, detecting subjectivity remains challenging due to complex grammatical structures, the multifaceted nature of language, and the nuanced ways in which opinions can be expressed. To advance research in this area, the CheckThat! 2025 lab has launched a shared task aimed at developing automatic systems for subjectivity detection across monolingual, multilingual, and zero-shot scenarios. In this study, we present a multilingual transformer-based approach tailored to both multilingual and zero-shot subjectivity detection. Our method utilizes contextualized representations from pre-trained transformers and is fine-tuned using Focal Loss, which emphasizes harder-to-classify examples during training. Experimental results demonstrate the effectiveness of our approach, which achieved competitive performance in the shared task, most notably securing first place in the zero-shot Ukrainian subjectivity detection track.</p>
      </abstract>
      <kwd-group>
        <kwd>multilingual subjectivity detection</kwd>
        <kwd>zero-shot subjectivity detection</kwd>
        <kwd>transformer</kwd>
        <kwd>focal loss</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Subjectivity encompasses viewpoints, evaluations, or conclusions shaped by individual perceptions,
emotions, opinions, or biases [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Identifying whether textual content conveys the author’s subjective
viewpoint has become a key area of research in natural language processing (NLP), due to its
wide-ranging applications such as fact-checking [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ], claim detection [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and sentiment analysis [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
Although initial investigations in this domain were largely centered on the English language, recent
efforts have increasingly expanded into multilingual contexts [
        <xref ref-type="bibr" rid="ref7">7, 8</xref>
        ]. Deep learning techniques have
demonstrated superior performance in subjectivity detection tasks [
        <xref ref-type="bibr" rid="ref2">2, 9</xref>
        ] compared to traditional methods
that rely on lexical or syntactic features [10, 11]. In multilingual settings, some approaches adopt a
two-step pipeline, where texts are first translated into English and then processed using monolingual
models trained on English data [
        <xref ref-type="bibr" rid="ref2">2, 9</xref>
        ]. Alternatively, other studies utilize multilingual models directly,
enabling subjectivity detection without intermediate translation [12, 13].
      </p>
      <p>To foster progress in subjectivity detection across multiple languages, Ruggeri et al. present a shared task as
part of CheckThat! 2025 at CLEF 2025 [14]. The task is organized into three distinct subtasks. Subtask 1
focuses on subjectivity detection in a monolingual setting, where both training and evaluation occur
within the same language. This subtask includes five languages: Arabic, Bulgarian, English, German,
and Italian. Subtask 2 addresses the multilingual scenario, requiring systems to be trained and tested on
a combination of texts from different languages. In contrast, subtask 3 explores the zero-shot setting,
where the model is trained on a subset of languages and evaluated on previously unseen ones. For
this purpose, the organizers have added four additional test languages: Greek, Polish, Romanian, and
Ukrainian. To illustrate the task definition, we present two examples for the English language in Table 1.</p>
      <sec id="sec-1-1">
        <title>Sentence</title>
        <p>The Immigration Invasion symbolizes a lot about the present state of the
immigration debate.</p>
        <p>As has been pointed out elsewhere, the cost of rent is considerably greater
than even the spiralling cost of energy bills.</p>
      </sec>
      <sec id="sec-1-2">
        <title>Label</title>
        <sec id="sec-1-2-1">
          <title>SUBJ OBJ</title>
          <p>To tackle the problem of distinguishing between subjective and objective sentences across multilingual
and zero-shot contexts, we propose a system in this paper. Our system harnesses a multilingual
transformer to extract contextualized features from the given sentence. We utilize focal loss, which
emphasizes harder-to-classify examples, helping the model focus on them during training.</p>
          <p>The remainder of this paper is structured as follows: Section 2 presents the architecture and
components of our proposed subjectivity detection system. Section 3 outlines the experimental framework,
including datasets, training configuration, and evaluation metrics. Section 4 provides an analysis and
discussion of the results. Finally, Section 5 concludes the work and discusses possible directions for
future research.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. System Overview</title>
      <p>This section outlines our approach for CheckThat! 2025 Task 1, which involves determining whether a
given sentence reflects the subjective view of its author or presents an objective perspective
on the topic. The competition includes three distinct settings, of which we participated in the latter two.
The second setting focuses on detecting subjectivity in textual data across multiple languages, whereas
the third requires training on multiple languages and evaluating performance on previously unseen
ones. An overview of our proposed framework is depicted in Figure 1.</p>
      <p>Given a sentence from a news article, our approach begins by generating contextual
representations using a multilingual transformer encoder. Specifically, we fine-tune a multilingual variant of
DeBERTa [15] to capture semantic features relevant to the task. The contextualized embedding
corresponding to the [CLS] token is passed into a classification head to produce unnormalized output scores
(logits). To compute the loss, we employ the Focal Loss function [16], which incorporates the predicted
logits, the ground-truth labels, and a focusing parameter γ that emphasizes harder-to-classify examples.
The model parameters are subsequently updated through backpropagation to improve generalization on
the training data. For the zero-shot setting, where training is performed on some languages and evaluation
occurs on previously unseen ones, we reuse the model fine-tuned for subtask 2.</p>
      <sec id="sec-2-1">
        <title>2.1. Multilingual Transformer</title>
        <p>In contrast to traditional sequence modeling approaches like LSTMs [17] and convolutional
networks [18], transformer-based architectures are well-suited for modeling long-range relationships
within sequences. Their use of multi-head self-attention mechanisms combined with positional
encoding allows for richer token-level interactions and improved contextual embeddings.</p>
        <p>In our study, we evaluate four prominent multilingual transformer-based models: mBERT [19],
XLM-RoBERTa [20], RemBERT [21], and multilingual DeBERTa [22]. These models are assessed based
on their effectiveness in the multilingual setting, and the highest-performing model is further used for
the zero-shot learning setup. Among the candidates, multilingual DeBERTa demonstrates the most
competitive performance. Consequently, we adopt it as the encoder in our proposed framework. For
implementation, we utilize the publicly available checkpoint from Hugging Face
(https://huggingface.co/microsoft/mdeberta-v3-base).</p>
      </sec>
      <p>[Figure 1: Overview of the proposed framework. The input sentence is tokenized ([CLS], Tok1 ... Tokn, [SEP]), encoded by the multilingual transformer, and the [CLS] representation is fed to a classifier trained with Focal Loss (focusing parameter γ) against the gold labels.]</p>
      <p>Multilingual DeBERTa is a variant of the DeBERTa [23] architecture designed for cross-lingual
representation learning. DeBERTa, short for Decoding-enhanced BERT with Disentangled Attention,
enhances the standard transformer design by decoupling the position and content embeddings and
incorporating a relative position bias in the self-attention mechanism. The third version of the model
introduces further improvements by integrating pre-layer normalization and optimized training
objectives [15]. The multilingual version is pre-trained on a large-scale multilingual corpus covering over
100 languages, enabling it to capture semantic nuances across diverse linguistic contexts. We adapt this
model to our classification task through fine-tuning, allowing it to learn task-specific representations.</p>
      <sec id="sec-2-2">
        <title>2.2. Focal Loss</title>
        <p>To address the challenge of class imbalance, we adopt Focal Loss [16] in this task. Traditional
cross-entropy loss [24] tends to be dominated by easily classified examples, which can hinder the model’s
ability to learn from harder, misclassified instances. Focal Loss mitigates this by introducing a modulating
factor that down-weights the contribution of well-classified samples, thereby encouraging the model to
focus more on difficult cases.</p>
        <p>Let the contextual embedding obtained from the encoder be denoted by h. The classifier then maps
this embedding to unnormalized scores (logits) as follows:</p>
        <p>Logit = W⊤h + b (1)</p>
        <p>where W ∈ R^(d×K) and b ∈ R^K are the parameters of the classification layer. K and d represent the
number of target classes and the hidden size of the model, respectively.</p>
        <p>Let the t-th class be the true class label for a given input sample. Then, the predicted probability for
the true class, denoted as p_t, is computed using the softmax function:</p>
        <p>p_t = exp(Logit_t) / Σ_{j=1}^{K} exp(Logit_j) (2)</p>
        <p>We now compare the standard Cross-Entropy Loss and the Focal Loss for this class:</p>
        <p>L_CE(p_t) = −log(p_t) (3)</p>
        <p>L_FL(p_t) = −(1 − p_t)^γ log(p_t) (4)</p>
        <p>where γ is the focusing parameter. Therefore, the focal loss augments the standard cross-entropy
formulation by introducing a factor (1 − p_t)^γ, where p_t is the predicted probability for the correct class.
The focusing parameter γ controls the degree to which well-classified examples are down-weighted.
Higher values of γ reduce the relative loss assigned to correctly predicted examples, thus placing greater
emphasis on learning from harder, misclassified samples.</p>
      </sec>
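      <p>As a concrete illustration of Equations (1)–(4), the following minimal, dependency-free Python sketch implements the classifier head, the softmax, and the Focal Loss for a single sample. The embedding, weight matrix, and two-class setup are invented toy values, not the shared-task code.</p>

```python
import math

def classifier_logits(h, W, b):
    # Eq. (1): Logit = W^T h + b, mapping a d-dimensional embedding
    # to K unnormalized class scores (W stored as K rows of length d).
    return [sum(w_i * h_i for w_i, h_i in zip(row, h)) + b_k
            for row, b_k in zip(W, b)]

def softmax(logits):
    # Eq. (2): numerically stable softmax over the K logits.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def focal_loss(logits, true_class, gamma=1.0):
    # Eqs. (3)-(4): -(1 - p_t)^gamma * log(p_t); gamma = 0 recovers
    # the standard cross-entropy loss.
    p_t = softmax(logits)[true_class]
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# Toy two-class example (illustrative values only): d = 3, K = 2.
h = [0.5, -1.0, 2.0]                     # contextual [CLS] embedding
W = [[0.4, 0.1, 0.3], [-0.2, 0.0, 0.1]]  # one row of weights per class
b = [0.0, 0.1]
logits = classifier_logits(h, W, b)
ce = focal_loss(logits, true_class=0, gamma=0.0)  # cross-entropy
fl = focal_loss(logits, true_class=0, gamma=1.0)  # focal loss
# Since the modulating factor (1 - p_t)^gamma never exceeds 1,
# fl is at most ce, and the gap widens for confident predictions.
```

      <p>With γ = 0 the function reduces to the standard cross-entropy loss of Equation (3); increasing γ shrinks the loss contribution of confidently correct predictions.</p>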
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <sec id="sec-3-data">
        <title>3.1. Dataset Overview</title>
        <p>To assess the performance of submitted systems for subjectivity detection in the CheckThat! 2025 Task 1,
the organizers provided an annotated benchmark dataset developed based on the annotation framework
by Antici et al. [25]. The dataset includes five languages for both the mono- and multilingual settings, with
an additional four languages incorporated for the zero-shot setting. The distribution of data samples
across these configurations is summarized in Table 2. Since no dedicated multilingual dataset was
available, we constructed one by aggregating the monolingual training and development sets from each
language into a unified multilingual training set. The dev-test sets were similarly merged to form a
multilingual development set. Our analysis of this aggregated dataset indicates a class distribution of
37.46% subjective instances and 62.54% objective instances, highlighting a significant class imbalance in
the training data. For the zero-shot scenario, we employed the model trained on this combined multilingual
dataset. During the evaluation phase, we combined the training and development sets to enhance model
learning and evaluated its performance on the unseen test set provided in the CodaLab competition.</p>
      </sec>
      <sec id="sec-3-params">
        <title>3.2. Parameter Settings</title>
        <p>This section presents the system setup for our submission to the CheckThat! 2025 Task 1. We fine-tune
the multilingual DeBERTa model available through the Hugging Face Transformers library [26]. All
experiments are executed on Google Colab [27] using an NVIDIA T4 GPU. The random seed is fixed at
66 to ensure consistent results across runs.</p>
        <p>We use the AdamW optimizer for training, which incorporates weight decay to improve generalization.
To handle class imbalance, we adopt the Focal Loss function, setting the focusing parameter γ = 1.
These and other hyperparameters were selected based on extensive experimentation across a range of
values. In the final configuration, a batch size of 8, a learning rate of 3 × 10^−5, and 3 training epochs
provide optimal performance in our setting.</p>
      </sec>
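      <p>The aggregation of the monolingual splits into a single multilingual training set, described above, can be sketched as follows; the toy label lists are hypothetical stand-ins for the per-language training and development files.</p>

```python
from collections import Counter

# Hypothetical toy splits; in the shared task each language provides its
# own train and dev files with sentence-level SUBJ/OBJ labels.
monolingual = {
    "english": {"train": ["SUBJ", "OBJ", "OBJ"], "dev": ["OBJ"]},
    "german":  {"train": ["OBJ", "SUBJ"],        "dev": ["SUBJ"]},
}

# Aggregate every language's training and development sets into one
# unified multilingual training set, as done for subtask 2.
multilingual_train = [
    label
    for splits in monolingual.values()
    for name in ("train", "dev")
    for label in splits[name]
]

# Class distribution of the aggregated set (on the real data this
# analysis showed 37.46% SUBJ vs 62.54% OBJ, a notable imbalance).
counts = Counter(multilingual_train)
total = sum(counts.values())
distribution = {label: 100.0 * n / total for label, n in counts.items()}
print(distribution)
```

      <p>The same merge applied to the dev-test files yields the multilingual development set used for model selection.</p>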
      <sec id="sec-3-1">
        <title>Arabic</title>
      </sec>
      <sec id="sec-3-2">
        <title>Bulgarian</title>
      </sec>
      <sec id="sec-3-3">
        <title>English</title>
      </sec>
      <sec id="sec-3-4">
        <title>German</title>
      </sec>
      <sec id="sec-3-5">
        <title>Italian</title>
      </sec>
      <sec id="sec-3-6">
        <title>Multilingual</title>
      </sec>
      <sec id="sec-3-7">
        <title>Greek</title>
      </sec>
      <sec id="sec-3-8">
        <title>Polish</title>
      </sec>
      <sec id="sec-3-9">
        <title>Romanian</title>
      </sec>
      <sec id="sec-3-10">
        <title>Ukrainian</title>
      </sec>
      <sec id="sec-3-11">
        <title>SUBJ OBJ Total</title>
      </sec>
      <sec id="sec-3-12">
        <title>SUBJ OBJ Total</title>
      </sec>
      <sec id="sec-3-13">
        <title>SUBJ OBJ Total</title>
      </sec>
      <sec id="sec-3-14">
        <title>SUBJ OBJ Total</title>
      </sec>
      <sec id="sec-3-15">
        <title>SUBJ OBJ Total</title>
      </sec>
      <sec id="sec-3-16">
        <title>SUBJ OBJ Total</title>
      </sec>
      <sec id="sec-3-17">
        <title>Total</title>
      </sec>
      <sec id="sec-3-18">
        <title>Total</title>
      </sec>
      <sec id="sec-3-19">
        <title>Total</title>
        <p>Total
3.3. Evaluation Measures
To evaluate the performance of the participants’ proposed systems, the organizers employ the
macroaveraged F1 score [28], which is particularly suitable for datasets exhibiting a long-tail distribution.
This metric provides a balanced assessment by computing the harmonic mean of precision and recall
across all classes.
3.4. Results and Analysis
In this section, we present an evaluation of the CSECU-Learners system developed for the subjectivity
detection task in news articles as part of CheckThat! 2025. Table 3 compares the performance of our
approach with selected participant systems under the multilingual setting on the test data. Our system
attained a macro-averaged F1 score of 0.7321, securing the 4th position in this task. Additionally, Table
3 includes the results for the zero-shot scenario, marked with the prefix “Zero-”, where the system was
tested on Greek, Polish, Romanian, and Ukrainian. The CSECU-Learners model consistently achieved
competitive results across all evaluated languages. These outcomes underscore the robustness and
generalization capability of our approach in both multilingual and zero-shot subjectivity detection
settings.</p>
      </sec>
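      <p>For clarity, the macro-averaged F1 metric can be reproduced with a short, dependency-free sketch; the gold and predicted label lists below are invented for illustration.</p>

```python
def per_class_f1(y_true, y_pred, cls):
    # F1 for one class: harmonic mean of precision and recall.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2.0 * precision * recall / denom if denom else 0.0

def macro_f1(y_true, y_pred):
    # Macro-averaging: unweighted mean of the per-class F1 scores, so the
    # minority class counts as much as the majority class.
    classes = sorted(set(y_true))
    return sum(per_class_f1(y_true, y_pred, c) for c in classes) / len(classes)

# Invented toy predictions, for illustration only.
gold = ["OBJ", "OBJ", "OBJ", "SUBJ"]
pred = ["OBJ", "OBJ", "SUBJ", "SUBJ"]
print(round(macro_f1(gold, pred), 4))  # prints 0.7333
```

      <p>Because each class contributes equally to the average, a model that neglects the minority SUBJ class is penalized even when its overall accuracy is high.</p>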
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>This section examines the influence of varying the focusing parameter γ in the Focal Loss function on
our system’s performance. As illustrated in Figure 2, we evaluate the model using different γ values
from the set {0, 1, 2, 3}. The figure presents the F1 scores for both the Subjective (SUBJ) and Objective
(OBJ) classes, along with their macro-averaged F1 score. When γ = 0, the modulating factor (1 − p_t)^γ
becomes 1, making the Focal Loss equivalent to the standard Cross-Entropy (CE) Loss. Under this
setting, the model achieves F1 scores of 0.59 for the SUBJ class and 0.85 for the OBJ class, resulting in a
macro F1 score of 0.72.</p>
      <p>Among the tested values, the highest macro F1 score (0.73) is observed when γ = 1, while the
lowest (0.70) occurs at γ = 3. These results suggest that setting γ = 1 provides the optimal balance,
improving the F1 score for the subjective class by approximately 4% and the macro F1 score by 1% over
the Cross-Entropy baseline. This improvement highlights the effectiveness of Focal Loss in guiding the
model’s attention toward more challenging examples, an aspect not adequately addressed by traditional
loss functions.</p>
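      <p>The down-weighting behavior behind this sweep can be checked directly by comparing the modulating factor for an easy and a hard example; the probabilities 0.9 and 0.3 are illustrative choices, not values from our experiments.</p>

```python
# Relative weight (1 - p_t)**gamma assigns to a hard example (p_t = 0.3)
# versus an easy, well-classified one (p_t = 0.9), for the swept gamma values.
def modulating_factor(p_t, gamma):
    return (1.0 - p_t) ** gamma

ratios = {}
for gamma in (0, 1, 2, 3):
    easy = modulating_factor(0.9, gamma)
    hard = modulating_factor(0.3, gamma)
    ratios[gamma] = hard / easy
print(ratios)  # gamma = 0 weights both equally; gamma = 3 favors the hard one ~343x
```

      <p>The ratio grows geometrically with γ, which is consistent with the observed trade-off: a moderate γ boosts the minority SUBJ class, while a large γ suppresses the gradient signal from easy examples too aggressively.</p>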
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Future Direction</title>
      <p>In this paper, we introduce a method for identifying subjectivity in news articles across multilingual
and zero-shot contexts by leveraging fine-tuned multilingual transformer models. We employ Focal
Loss to focus the model’s learning on hard-to-classify instances during training, thereby enhancing its
generalizability. Empirical results validate the effectiveness of our approach. In the multilingual setting,
our method achieved a competitive rank, placing 4th in the leaderboard. In the zero-shot scenario,
the system demonstrated strong performance, securing 1st place for Greek, 2nd for both Polish and
Romanian, and 3rd for Ukrainian.</p>
      <p>For future directions, we intend to explore advanced multilingual transformer architectures and
evaluate ensemble strategies that combine multiple transformer models. Given the class imbalance
in the dataset, we also plan to implement data augmentation techniques to enhance representation
across categories, intending to further improve system performance in both multilingual and zero-shot
subjectivity detection tasks.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors utilized ChatGPT and Grammarly for grammar and
spelling checks, paraphrasing, and rewording. After using these tools, the authors reviewed and edited
the content as needed and take full responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[8] J. M. Struß, F. Ruggeri, A. Barrón-Cedeño, F. Alam, D. Dimitrov, A. Galassi, G. Pachov, I. Koychev, P. Nakov, M. Siegel, et al., Overview of the CLEF-2024 CheckThat! lab task 2 on subjectivity in news articles, in: CEUR Workshop Proceedings, volume 3740, CEUR-WS, 2024, pp. 287–298.</p>
      <p>[9] M. Casanova, J. Chanson, B. Icard, G. Faye, G. Gadek, G. Gravier, P. Égré, Hybrinfox at CheckThat! 2024 - Task 2: Enriching BERT models with the expert system VAGO for subjectivity detection (2024).</p>
      <p>[10] E. Gajewska, Eevvgg at CheckThat! 2024: Evaluative terms, pronouns and modal verbs as markers of subjectivity in text, Faggioli et al. [22] (2024).</p>
      <p>[11] P. Premnath, P. Subramani, N. R. Salim, B. Bharathi, SSN-NLP at CheckThat! 2024: From feature-based algorithms to transformers: A study on detecting subjectivity, in: Conference and Labs of the Evaluation Forum, 2024. URL: https://api.semanticscholar.org/CorpusID:271793774.</p>
      <p>[12] A. Rodríguez, E. Golobardes, J. Suau, Tonirodriguez at CheckThat! 2024: Is it possible to use zero-shot cross-lingual methods for subjectivity detection in low-resources languages?, in: Conference and Labs of the Evaluation Forum, 2024. URL: https://api.semanticscholar.org/CorpusID:271851634.</p>
      <p>[13] F. Leistra, T. Caselli, Thesis Titan at CheckThat!-2023: Language-specific fine-tuning of mDeBERTaV3 for subjectivity detection, in: Conference and Labs of the Evaluation Forum, 2023. URL: https://api.semanticscholar.org/CorpusID:264441796.</p>
      <p>[14] F. Ruggeri, A. Muti, K. Korre, J. M. Struß, M. Siegel, M. Wiegand, F. Alam, R. Biswas, W. Zaghouani, M. Nawrocka, B. Ivasiuk, G. Razvan, A. Mihail, Overview of the CLEF-2025 CheckThat! lab task 1 on subjectivity in news articles, in: G. Faggioli, N. Ferro, P. Rosso, D. Spina (Eds.), Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum, CLEF 2025, Madrid, Spain, 2025.</p>
      <p>[15] P. He, J. Gao, W. Chen, DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing, 2021. arXiv:2111.09543.</p>
      <p>[16] T.-Y. Lin, P. Goyal, R. Girshick, K. He, P. Dollár, Focal loss for dense object detection, in: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.</p>
      <p>[17] M. Schuster, K. K. Paliwal, Bidirectional recurrent neural networks, IEEE Transactions on Signal Processing 45 (1997) 2673–2681.</p>
      <p>[18] I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, volume 1, 2016.</p>
      <p>[19] J. Devlin, Multilingual BERT README document, https://github.com/google-research/bert/blob/master/multilingual.md, 2018.</p>
      <p>[20] A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, V. Stoyanov, Unsupervised cross-lingual representation learning at scale, in: D. Jurafsky, J. Chai, N. Schluter, J. Tetreault (Eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Online, 2020, pp. 8440–8451. URL: https://aclanthology.org/2020.acl-main.747/. doi:10.18653/v1/2020.acl-main.747.</p>
      <p>[21] H. W. Chung, T. Févry, H. Tsai, M. Johnson, S. Ruder, Rethinking embedding coupling in pre-trained language models, 2020. URL: https://arxiv.org/abs/2010.12821. arXiv:2010.12821.</p>
      <p>[22] P. He, J. Gao, W. Chen, mDeBERTa-v3 - multilingual DeBERTa v3 model, https://huggingface.co/microsoft/mdeberta-v3-base, 2023.</p>
      <p>[23] P. He, X. Liu, J. Gao, W. Chen, DeBERTa: Decoding-enhanced BERT with disentangled attention, arXiv preprint arXiv:2006.03654 (2020).</p>
      <p>[24] A. Mao, M. Mohri, Y. Zhong, Cross-entropy loss functions: Theoretical analysis and applications, in: International Conference on Machine Learning, PMLR, 2023, pp. 23803–23828.</p>
      <p>[25] F. Antici, F. Ruggeri, A. Galassi, K. Korre, A. Muti, A. Bardi, A. Fedotova, A. Barrón-Cedeño, A corpus for sentence-level subjectivity detection on English news articles, in: N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), ELRA and ICCL, Torino, Italia, 2024, pp. 273–285. URL: https://aclanthology.org/2024.lrec-main.25/.</p>
      <p>[26] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, et al., HuggingFace’s Transformers: State-of-the-art natural language processing, arXiv preprint arXiv:1910.03771 (2019).</p>
      <p>[27] E. Bisong, Google Colaboratory, in: Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide for Beginners, Springer, 2019, pp. 59–64.</p>
      <p>[28] M. Sokolova, G. Lapalme, A systematic analysis of performance measures for classification tasks, Information Processing &amp; Management 45 (2009) 427–437.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kocoń</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gruza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bielaniewicz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Grimling</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kanclerz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Miłkowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kazienko</surname>
          </string-name>
          ,
          <article-title>Learning personal human biases and representations for subjective tasks in natural language processing</article-title>
          ,
          <source>in: 2021 IEEE International Conference on Data Mining (ICDM)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1168</fpage>
          -
          <lpage>1173</lpage>
          . doi:10.1109/ICDM51629.2021.00140.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M. R.</given-names>
            <surname>Biswas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. T.</given-names>
            <surname>Abir</surname>
          </string-name>
          , W. Zaghouani, Nullpointer at checkthat! 2024:
          <article-title>Identifying subjectivity from multilingual text sequence</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2407.10252. arXiv:
          <volume>2407</volume>
          .
          <fpage>10252</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L. L.</given-names>
            <surname>Vieira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L. M.</given-names>
            <surname>Jerônimo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. E.</given-names>
            <surname>Campelo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Marinho</surname>
          </string-name>
          ,
          <article-title>Analysis of the subjectivity level in fake news fragments</article-title>
          ,
          <source>in: Proceedings of the Brazilian Symposium on Multimedia and the Web</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>233</fpage>
          -
          <lpage>240</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>C. L. M.</given-names>
            <surname>Jeronimo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. B.</given-names>
            <surname>Marinho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. E.</given-names>
            <surname>Campelo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Veloso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>da Costa Melo</surname>
          </string-name>
          ,
          <article-title>Fake news classification based on subjective language</article-title>
          ,
          <source>in: Proceedings of the 21st International Conference on Information Integration and Web-based Applications &amp; Services</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Riloff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <article-title>Learning extraction patterns for subjective expressions</article-title>
          ,
          <source>in: Proceedings of the 2003 conference on Empirical methods in natural language processing</source>
          ,
          <year>2003</year>
          , pp.
          <fpage>105</fpage>
          -
          <lpage>112</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hoffmann</surname>
          </string-name>
          ,
          <article-title>Recognizing contextual polarity in phrase-level sentiment analysis</article-title>
          ,
          <source>in: Proceedings of human language technology conference and conference on empirical methods in natural language processing</source>
          ,
          <year>2005</year>
          , pp.
          <fpage>347</fpage>
          -
          <lpage>354</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Galassi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ruggeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barrón-Cedeño</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Caselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kutlu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Struß</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Antici</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hasanain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Köhler</surname>
          </string-name>
          , et al.,
          <article-title>Overview of the CLEF-2023 CheckThat! lab: Task 2 on subjectivity in news articles</article-title>
          ,
          <source>in: 24th Working Notes of the Conference and Labs of the Evaluation Forum, CLEF-WN 2023, CEUR Workshop Proceedings (CEUR-WS.org)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>236</fpage>
          -
          <lpage>249</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>