<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>cepanca_UNAM at CheckThat! 2025: A Language-driven BERT Approach for Detection of Subjectivity in News</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Iván Diaz</string-name>
          <email>diazrysivan@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jessica Barco</string-name>
          <email>jessbarco13@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joana Hernández</string-name>
          <email>joana.hernandez.rebollo@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Edgar Lee-Romero</string-name>
          <email>edgar.lee133@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gemma Bel-Enguix</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Instituto de Ingeniería, Universidad Nacional Autónoma de México</institution>
          ,
          <addr-line>Ciudad de México 04510</addr-line>
          ,
          <country country="MX">México</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Posgrado en Ciencias e Ingeniería de la Computación, Universidad Nacional Autónoma de México</institution>
          ,
          <addr-line>Ciudad de México 04510</addr-line>
          ,
          <country country="MX">México</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Universidad Autónoma Metropolitana Unidad Iztapalapa</institution>
          ,
          <addr-line>Ciudad de México 09340</addr-line>
          ,
          <country country="MX">México</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Automatic subjectivity detection is a trending issue in Natural Language Processing. Responding to the challenge of finding models that are able to distinguish between objective and subjective segments, the CheckThat! Lab, part of CLEF 2025, invites participants in Task 1 to determine whether a sentence from a news article expresses the subjectivity of its author. The task has three settings: monolingual, multilingual, and zero-shot. Our contribution focuses on monolingual classification in three of the proposed languages: English, Italian, and German. Our approach is based on Transformer-based models: we opted for BERT-base-uncased for English, BERT-base-italian-cased-sentiment for Italian, and German BERT large for German. In our work, we have taken into account lexical features, specifically the distribution of the different grammatical categories in each of the corpora. Despite the simplicity of our models, our results were competitive, obtaining second place in German with a macro-F1 of 0.8280.</p>
      </abstract>
      <kwd-group>
        <kwd>Subjectivity</kwd>
        <kwd>Objectivity</kwd>
        <kwd>Transformer Models</kwd>
        <kwd>BERT</kwd>
        <kwd>Binary Classification</kwd>
        <kwd>LLM</kwd>
        <kwd>CheckThat! 2025</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The semantic distinction between subjectivity and objectivity has traditionally been understood as a
dichotomy that distinguishes whether the author of a sentence appears immersed in the enunciation or
not [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. From the area of NLP, it is seen as the ‘aspects of language used to express opinions, evaluations,
and speculations’ [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. However, it is clear that the way humans communicate and talk about events and
experiences is necessarily unique, arising from our own experience [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Therefore, subjectivity seems
unavoidable in human language communication.
      </p>
      <p>For the purposes of the present study, we understand subjectivity as statements that rely on personal
opinions or emotions, signaled by the grammatical presence of an enunciator. Our objective is to
explore the application of advanced natural language processing methods, still under active development,
in an attempt to improve results on this task. This paper is a first approximation to the tools
that can serve that improvement.</p>
      <p>
        This work originates from the 2025 edition of the CheckThat! Lab [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], held at CLEF 2025 [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
CheckThat! 2025 is the eighth version of the competition. Task 1 [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] is devoted to segment-level subjectivity
detection: a binary classification task in which the systems developed had to identify a text
sequence as subjective or objective. The organizers posed three possible settings: a) monolingual, b) multilingual,
and c) zero-shot. Our team decided to participate in the monolingual sub-tasks in the following languages:
a) German, b) Italian, and c) English. For all training, we utilized the datasets provided by the organizers
of the competition, which were composed of news articles.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>Automatic subjectivity detection has been a fundamental topic in Natural Language Processing for
years. Subjectivity detection, in particular, is an essential subtask of sentiment analysis because most
polarity detection tools are optimized to distinguish between positive and negative text. Subjectivity
detection, hence, ensures that factual information is filtered out and only opinionated information is
passed on to the polarity classifier.</p>
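      <p>This filtering pipeline can be pictured as a subjectivity gate in front of the polarity classifier. The following is a minimal sketch of that idea; both classifiers are toy stand-ins with invented cue-word lists, not components of any real system.</p>

```python
# Sketch of a subjectivity gate in a sentiment-analysis pipeline.
# Both classifiers are toy stand-ins for trained models.

def is_subjective(sentence):
    """Toy subjectivity detector: flags opinion-bearing cue words."""
    cues = {"love", "hate", "awful", "wonderful", "think", "feel"}
    return any(token in cues for token in sentence.lower().split())

def polarity(sentence):
    """Toy polarity classifier, applied only to subjective text."""
    positive = {"love", "wonderful"}
    tokens = set(sentence.lower().split())
    return "positive" if tokens.intersection(positive) else "negative"

sentences = [
    "The report was published on Friday.",  # factual, filtered out
    "I love the new policy.",               # opinionated, passed on
]
opinions = [s for s in sentences if is_subjective(s)]
results = {s: polarity(s) for s in opinions}
```

      <p>Only the opinionated sentence reaches the polarity step; the factual one is discarded by the gate.</p>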
      <p>
        Early attempts to address the problem were based on the use of lexicons, dictionaries, and other
lexical resources that could detect words specifically related to subjectivity [
        <xref ref-type="bibr" rid="ref10 ref11 ref8 ref9">8, 9, 10, 11</xref>
        ].
      </p>
      <p>
        The need to create annotated corpora arose, especially with the emergence of machine learning
methods that would allow a supervised approach to the subject. Classical corpora include MPQA
(Multi-Perspective Question Answering) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], and NewsSD-ENG, compiled from English news articles
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], which has been partially used in the CheckThat! task. One of the main problems of annotating
something like subjectivity is that the annotation itself is a highly subjective task. This is why some authors have suggested
the paradigm of disagreements [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>With language models and large language models, all NLP tasks have undergone major development,
and subjectivity detection is no exception. Transformer-based frameworks focus on providing pre-trained
models that reduce computational cost, cut carbon emissions, and save the time required to train
conventional models from scratch. BERT (Bidirectional Encoder Representations from Transformers)
is one of these pre-trained models, and it provides substantial output with modest tuning for detecting
subjectivity [15, 16].</p>
      <p>In particular, the BERT-based models are being successfully applied to the task. Satapathy et al. [17]
present a multi-task model for detecting and mutually supporting polarity and subjectivity detection. In
the 2024 CheckThat! task, the teams that won in English [18], German [19], and Italian [20] applied BERT
models to approach the problem.</p>
      <p>In 2024, the JK_PCIC_UNAM team [20] utilized BERT-based models for two languages, English
and Italian, exploring whether sentences in news articles were written with tints of subjectivity or
objectivity.</p>
      <p>Our methodology does not include the use of LLMs, although such models have been shown to achieve
competitive results [21].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>The languages that we selected for this shared task were English, Italian, and German. The following
analysis was made using the datasets for these languages.</p>
      <sec id="sec-3-1">
        <title>3.1. Analysis of the dataset</title>
        <p>First, we analyzed the label distribution (OBJ, SUBJ) to detect imbalanced classes, which could
compromise model training.</p>
        <p>As shown in Table 1, the German dataset exhibits a relatively balanced label distribution, whereas
the Italian and English datasets display a significant imbalance, with a pronounced bias toward the
OBJ class over SUBJ. This disparity may adversely affect model training, potentially leading to biased
predictions.</p>
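        <p>The label check itself takes only a few lines of Python. The snippet below assumes the split has already been loaded as (sentence, label) pairs; the toy rows are invented for illustration and do not reflect the organizers' file format.</p>

```python
from collections import Counter

# Toy stand-in for a loaded training split: (sentence, label) pairs.
rows = [
    ("Die Inflation stieg im Maerz.", "OBJ"),
    ("Das ist eine Katastrophe!", "SUBJ"),
    ("Der Bericht erschien gestern.", "OBJ"),
]

counts = Counter(label for _, label in rows)
total = sum(counts.values())
# Relative frequency of each label, to spot class imbalance.
distribution = {label: count / total for label, count in counts.items()}
```

        <p>A distribution far from 50/50, as in the Italian and English splits, is the signal that training may need class weighting or resampling.</p>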
        <p>After analyzing label distribution, we conducted a Part-of-Speech (POS) analysis to explore linguistic
patterns across the datasets. In all cases, the "Others" category was predominant, accounting for over
50% of POS tags. This indicates a higher frequency of grammatical function words compared to lexical
content (e.g., nouns, verbs).</p>
        <p>Additionally, we observed a disparity in adverb usage: posts labeled as subjective contained a higher
proportion of adverbs. This aligns with the linguistic function of adverbs, which often serve to express
opinions, emotions, or personal perspectives.</p>
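        <p>The adverb comparison can be sketched as follows. The pre-tagged toy sentences stand in for the output of a real POS tagger, which is an assumption about tooling rather than a description of our actual pipeline.</p>

```python
# Toy (label, [(token, pos_tag), ...]) pairs standing in for POS-tagger output.
tagged = [
    ("SUBJ", [("honestly", "ADV"), ("this", "DET"), ("is", "VERB"),
              ("really", "ADV"), ("bad", "ADJ")]),
    ("OBJ",  [("the", "DET"), ("law", "NOUN"), ("passed", "VERB"),
              ("yesterday", "ADV")]),
]

# Proportion of adverbs among all tokens, per label.
adv_share = {}
for label, tokens in tagged:
    advs = sum(1 for _, pos in tokens if pos == "ADV")
    adv_share[label] = advs / len(tokens)
```

        <p>In this toy sample the subjective sentence has a higher adverb share (2 of 5 tokens) than the objective one (1 of 4), mirroring the pattern observed in the datasets.</p>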
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Machine Learning Models</title>
        <p>For this task, we decided to compare the performance of traditional machine learning methods
with Transformer-based methods. For this purpose, we ran tests with classical classification
algorithms and selected Logistic Regression, since it gave the best results. The LR model
served as a baseline against which to compare the performance of
the fine-tuned BERT-based models we applied. We vectorized the text with bag-of-words features and
used 3-grams and 4-grams. After some evaluations, we did not filter the stopwords.</p>
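        <p>A minimal sketch of this baseline, assuming scikit-learn and reading the 3-grams and 4-grams as word-level n-grams over the bag-of-words representation; the tiny training sentences are invented for illustration only.</p>

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy split, two sentences per class.
train_texts = [
    "i really love this movie a lot",
    "i really love this film so much",
    "the report was published on friday",
    "the report was published this morning",
]
train_labels = ["SUBJ", "SUBJ", "OBJ", "OBJ"]

# Bag-of-words over word 3-grams and 4-grams; stopwords are kept.
baseline = make_pipeline(
    CountVectorizer(ngram_range=(3, 4)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(train_texts, train_labels)

pred = baseline.predict(["i really love this book"])
```

        <p>Shared n-grams such as "i really love" carry the classification; a held-out sentence reusing them is labeled SUBJ.</p>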
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Transformer Training</title>
        <p>We maintain the protocol established last year [20], ensuring comparability between different
editions. Our approach employed BERT (Bidirectional Encoder Representations from Transformers) as
the primary architecture for subjectivity classification. BERT models, being pre-trained on
extensive text corpora, demonstrate particular effectiveness in sequence classification applications. For
language-specific implementations, we utilized BERT-base-uncased for English,
BERT-base-italian-cased-sentiment for Italian, and German BERT large for German [22].</p>
        <p>German BERT large is a language model for German, released in 2020 by the creators of the original
German BERT and the dbmdz BERT. Pretrained on approximately 170 GB of text data, it leverages
diverse linguistic sources, with the OSCAR corpus being one of its most significant training datasets
[22]. Given OSCAR’s broad coverage of web-sourced texts, the model benefits from varied linguistic
patterns, making it particularly suitable for this task.</p>
        <p>The Italian BERT sentiment model was pretrained by Neuraly AI from an instance of
bert-base-italian-cased, fine-tuned on a corpus of 45k tweets to perform sentiment analysis in Italian. Although the
domain of the training dataset was reported to be football, Neuraly AI claims the model remains effective on other
topics, a claim that proved accurate when the model was presented with the competition's
news datasets [23].</p>
        <p>BERT-base-uncased is also a pretrained model, trained in a self-supervised manner with two objectives:
Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). In this way, the model learns
an inner representation of the English language that can then be used to extract features useful for
downstream tasks [15]. The main objective of our application is to classify sentences as subjective or
objective based on selected parameters, using factual language as a starting point for categorizing
text. This approach is also commonly applied in sentiment analysis.</p>
        <p>All models were fine-tuned on the provided dataset, with hyperparameter optimization performed
exclusively on the training set to maintain evaluation integrity. The fine-tuning process focused on
adjusting the supervised classifier’s parameters, using 4 training epochs, a batch size of 16, and a maximum
input length of 256 tokens.</p>
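        <p>For reference, the configuration above can be restated as a plain dictionary; the key names are our own shorthand, not a specific library's API, and the truncation helper simply illustrates how the 256-token limit is enforced.</p>

```python
# Fine-tuning hyperparameters used for all three BERT models.
FINETUNE_CONFIG = {
    "epochs": 4,
    "batch_size": 16,
    "max_length": 256,  # input sequences truncated to 256 tokens
}

def truncate(token_ids, max_length=FINETUNE_CONFIG["max_length"]):
    """Enforce the maximum input length by truncation."""
    return token_ids[:max_length]
```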
        <p>Model performance was assessed using precision, recall, F1 score, and macro-averaged F1 score for
each experimental condition. We prioritize macro-F1 as our primary optimization metric due to its
robustness in addressing class imbalance issues inherent in subjectivity classification tasks.</p>
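        <p>These metrics follow directly from their definitions. The sketch below is our own stdlib-only illustration of how they are computed, not the organizers' scorer.</p>

```python
def per_class(y_true, y_pred, label):
    """Precision, recall, and F1 for a single class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

def macro_scores(y_true, y_pred, labels=("OBJ", "SUBJ")):
    """Accuracy and macro-averaged precision, recall, and F1."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    scores = [per_class(y_true, y_pred, lab) for lab in labels]
    macro = [sum(s[i] for s in scores) / len(labels) for i in range(3)]
    return acc, macro[0], macro[1], macro[2]

y_true = ["OBJ", "OBJ", "SUBJ", "SUBJ", "OBJ"]
y_pred = ["OBJ", "SUBJ", "SUBJ", "SUBJ", "OBJ"]
acc, mp, mr, mf1 = macro_scores(y_true, y_pred)
```

        <p>On the toy predictions above, accuracy is 0.8 and macro-F1 is 0.8; macro-averaging weights both classes equally, which is why we prioritize macro-F1 under class imbalance.</p>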
        <p>All experiments were conducted in Google Colab using GPU acceleration to handle the computational
demands of fine-tuning.</p>
        <sec id="sec-3-3-1">
          <title>3.3.1. Model Comparison and Hyperparameter Optimization</title>
          <p>We conducted a systematic performance comparison between our baseline Logistic Regression (LR)
model and the BERT-based classifiers across all target languages (English, Italian, and German). This dual-model
approach served to establish the transformers’ performance gains over classical methods, as shown in
Table 3.</p>
          <p>The Transformer results, which we obtained through systematic hyperparameter tuning on the
organizers’ development datasets, are presented in Tables 4 (English), 5 (Italian), and 6 (German).</p>
          <p>For German and Italian, we observed that reducing the batch size to 16 while increasing the maximum
token length to 256 yielded superior performance. This improvement likely stems from the need to
preserve complete linguistic structures in longer sentences and the risk of losing critical contextual
information (and introducing bias) with shorter sequences.</p>
          <p>In contrast, English achieved optimal results with shorter sequences (128 tokens) and smaller batches
(16), suggesting different processing requirements for this language.</p>
          <p>The strong performance of the German model may be attributed to its pretraining datasets,
such as OSCAR. While OSCAR’s web-sourced texts provide broad linguistic coverage, they may also
introduce challenges due to potential misinformation and inaccuracies inherent in internet content.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Analysis of the results</title>
      <p>We evaluated our models’ performance against the official shared task baseline using the organizers’
evaluation framework. The standardized scorer computes the following classification metrics, where TP,
TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, and N is the
number of classes:</p>
      <p>Accuracy = (TP + TN) / (TP + TN + FP + FN)</p>
      <p>macro-P = (1/N) Σᵢ Pᵢ, where Pᵢ = TPᵢ / (TPᵢ + FPᵢ)</p>
      <p>macro-R = (1/N) Σᵢ Rᵢ, where Rᵢ = TPᵢ / (TPᵢ + FNᵢ)</p>
      <p>macro-F1 = (1/N) Σᵢ F1ᵢ, where F1ᵢ = 2 × (Pᵢ × Rᵢ) / (Pᵢ + Rᵢ)</p>
      <p>The scorer also reports class-specific metrics (e.g., for the SUBJ class):</p>
      <p>SUBJ-P = TP_SUBJ / (TP_SUBJ + FP_SUBJ)</p>
      <p>SUBJ-R = TP_SUBJ / (TP_SUBJ + FN_SUBJ)</p>
      <p>SUBJ-F1 = 2 × (SUBJ-P × SUBJ-R) / (SUBJ-P + SUBJ-R)</p>
      <p>The following tables present our model’s performance against the official baseline for each target
language. Our results demonstrate that even with a simple approach, we achieved performance above
the organizers’ baselines. Key improvements are highlighted in the subsequent analysis.</p>
      <p>For English (Table 8), our model achieved 14th place out of 24 teams, showing modest gains in
accuracy (+1.27%) and recall (+0.57%) but significant challenges in subjective content detection
(SUBJ-Precision: -21.53%). This performance pattern suggests that while our simple approach generalized well
for objective content, it struggled with English-specific subjective constructs, where more complex
systems excelled.</p>
      <p>Our Italian results secured us 9th place out of 15 teams, demonstrating remarkable improvements in
precision-oriented metrics, particularly for SUBJ classification (+49.15% precision gain). While the recall
decrease (-15.90%) indicates our model adopted a more conservative approach to subjective content
detection—correctly identifying positives but potentially missing marginal cases—this precision-focused
strategy proved effective in the competition context. The balanced performance across metrics (accuracy:
+7.19%, macro-F1: +8.55%) suggests our approach successfully negotiated the trade-off between false
positives and coverage.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and future work</title>
      <p>In this research, simple models have been used for the task of identifying binary subjectivity in English,
Italian, and German. After an exploratory analysis of the data, a logistic regression model was tested as
a baseline.</p>
      <p>Once the baseline was established, pretrained models such as BERT-base-uncased for English,
BERT-base-italian-cased-sentiment for Italian, and German BERT large for German were employed for analysis
and classification. It was found that the Transformer-based models outperformed the traditional ones.
Moreover, despite the simplicity of the technique used, the results obtained were competitive. Our best
performance was second place in the German task.</p>
      <p>In the future, we plan to employ other strategies and techniques, including the treatment of
corpus imbalance and the incorporation of linguistic elements related to subjectivity, such as sentiment
analysis or the use of adjectives.</p>
      <p>Additionally, we plan to integrate reinforcement learning methods into the models to improve the
performance of the algorithms.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>I. Díaz thanks the CONAHCYT scholarship program (CVU: 923309). This research was funded by UNAM,
PAPIIT project IG400325.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used DeepL and Grammarly for grammar and
spelling checking and for translation into English. After using these tools, the authors reviewed and
edited the content as needed and take full responsibility for the publication’s content.</p>
      <p>[15] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers
for language understanding, Association for Computational Linguistics (2019).</p>
      <p>[16] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin,
Attention is all you need, 2017. URL: https://proceedings.neurips.cc/paper_files/paper/2017/file/
3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.</p>
      <p>[17] R. Satapathy, S. Pardeshi, E. Cambria, Polarity and subjectivity detection with multitask
learning and BERT embedding, 2022. URL: https://www.mdpi.com/1999-5903/14/7/191. doi:10.3390/fi14070191.</p>
      <p>[18] M. Casanova, J. Chanson, B. Icard, G. Faye, G. Gadek, G. Gravier, P. Égré, HYBRINFOX at CheckThat!
2024 – Task 2: Enriching BERT models with the expert system VAGO for subjectivity detection, arXiv
preprint arXiv:2407.03770 (2024).</p>
      <p>[19] M. R. Biswas, A. T. Abir, W. Zaghouani, NullPointer at CheckThat! 2024: Identifying subjectivity
from multilingual text sequence, CEUR Workshop Proceedings 3740 (2024) 361–368.</p>
      <p>[20] K. Salas-Jimenez, I. Díaz, H. Gómez-Adorno, G. Bel-Enguix, G. Sierra, JK_PCIC_UNAM at CheckThat!
2024: Analysis of subjectivity in news sentences using transformers-based models, CEUR Workshop
Proceedings (2024).</p>
      <p>[21] M. Shokri, V. Sharma, E. Filatova, S. Jain, S. Levitan, Subjectivity detection in English news using
large language models, in: Proceedings of the 14th Workshop on Computational Approaches to
Subjectivity, Sentiment, &amp; Social Media Analysis, 2024, pp. 215–226.</p>
      <p>[22] B. Chan, S. Schweter, T. Möller, German’s next language model, Digital Library, Munich Digitization
Center (2020).</p>
      <p>[23] NeuralyIA, neuraly/bert-base-italian-cased-sentiment, https://huggingface.co/neuraly/
bert-base-italian-cased-sentiment, 2021. Accessed: 2024-05-24.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.-C.</given-names>
            <surname>Verstraete</surname>
          </string-name>
          ,
          <article-title>Subjective and objective modality: interpersonal and ideational functions in the english modal auxiliary system</article-title>
          ,
          <source>Journal of Pragmatics</source>
          <volume>33</volume>
          (
          <year>2001</year>
          )
          <fpage>1505</fpage>
          -
          <lpage>1528</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0378216601000297. doi:10.1016/S0378-2166(01)00029-7.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          , T. Wilson,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bruce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <article-title>Learning subjective language</article-title>
          ,
          <source>Computational Linguistics</source>
          <volume>30</volume>
          (
          <year>2004</year>
          )
          <fpage>277</fpage>
          -
          <lpage>308</lpage>
          . URL: https://aclanthology.org/J04-3002/. doi:10.1162/0891201041850885.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <article-title>Tracking point of view in narrative</article-title>
          ,
          <source>Computational Linguistics</source>
          <volume>20</volume>
          (
          <year>1994</year>
          )
          <fpage>233</fpage>
          -
          <lpage>287</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>Basile</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Caselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Balahur</surname>
          </string-name>
          , L.-W. Ku,
          <article-title>Editorial: Bias, subjectivity and perspectives in natural language processing</article-title>
          ,
          <source>Frontiers in Artificial Intelligence</source>
          <volume>5</volume>
          (
          <year>2022</year>
          ). URL: https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2022.926435. doi:10.3389/frai.2022.926435.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Alam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Struß</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dietze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hafid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Korre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Muti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Nakov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ruggeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schellhammer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Setty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sundriyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Todorov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Venktesh</surname>
          </string-name>
          ,
          <article-title>Overview of the CLEF-2025 CheckThat! Lab: Subjectivity, fact-checking, claim normalization, and retrieval</article-title>
          , in: J.
          <string-name>
            <surname>Carrillo-de Albornoz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Gonzalo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Plaza</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>García Seco de Herrera</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Mothe</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Piroi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Spina</surname>
          </string-name>
          , G. Faggioli, N. Ferro (Eds.),
          <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Sixteenth International Conference of the CLEF Association (CLEF</source>
          <year>2025</year>
          ),
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Faggioli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          , D. Spina (Eds.), Working Notes of CLEF 2025 -
          <article-title>Conference and Labs of the Evaluation Forum</article-title>
          ,
          <string-name>
            <surname>CLEF</surname>
          </string-name>
          <year>2025</year>
          , Madrid, Spain,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>F.</given-names>
            <surname>Ruggeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Muti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Korre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Struß</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Siegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Biswas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zaghouani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nawrocka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ivasiuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Razvan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mihail</surname>
          </string-name>
          ,
          <article-title>Overview of the CLEF-2025 CheckThat! lab task 1 on subjectivity in news article</article-title>
          ,
          <source>in: [6]</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>I.</given-names>
            <surname>Maks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Vossen</surname>
          </string-name>
          ,
          <article-title>A lexicon model for deep sentiment analysis and opinion mining applications</article-title>
          ,
          <source>Decision Support Systems</source>
          <volume>53</volume>
          (
          <year>2012</year>
          )
          <fpage>680</fpage>
          -
          <lpage>688</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0167923612001364. doi:10.1016/j.dss.2012.05.025.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Steinberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ebrahim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ehrmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hurriyetoglu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kabadjov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lenkova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Steinberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Tanev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vázquez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Zavarella</surname>
          </string-name>
          ,
          <article-title>Creating sentiment dictionaries via triangulation</article-title>
          ,
          <source>Decision Support Systems</source>
          <volume>53</volume>
          (
          <year>2012</year>
          )
          <fpage>689</fpage>
          -
          <lpage>694</lpage>
          . URL: https://www.sciencedirect.com/science/article/pii/S0167923612001406. doi:10.1016/j.dss.2012.05.029.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E.</given-names>
            <surname>Riloff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <article-title>Learning extraction patterns for subjective expressions</article-title>
          ,
          <source>in: Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing</source>
          ,
          <year>2003</year>
          , pp.
          <fpage>105</fpage>
          -
          <lpage>112</lpage>
          . URL: https://aclanthology.org/W03-1014/.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.-M.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. H.</given-names>
            <surname>Hovy</surname>
          </string-name>
          ,
          <article-title>Automatic detection of opinion bearing words and sentences</article-title>
          ,
          <source>in: International Joint Conference on Natural Language Processing</source>
          ,
          <year>2005</year>
          . URL: https://api.semanticscholar.org/CorpusID:2423990.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wiebe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cardie</surname>
          </string-name>
          ,
          <article-title>Annotating expressions of opinions and emotions in language</article-title>
          ,
          <source>Language Resources and Evaluation</source>
          <volume>39</volume>
          (
          <year>2005</year>
          )
          <fpage>165</fpage>
          -
          <lpage>210</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F.</given-names>
            <surname>Antici</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ruggeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Galassi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Korre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Muti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fedotova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barrón-Cedeño</surname>
          </string-name>
          ,
          <article-title>A corpus for sentence-level subjectivity detection on English news articles</article-title>
          , in:
          <string-name>
            <given-names>N.</given-names>
            <surname>Calzolari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-Y.</given-names>
            <surname>Kan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Hoste</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lenci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sakti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Xue</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)</source>
          , ELRA and ICCL, Torino, Italia,
          <year>2024</year>
          , pp.
          <fpage>273</fpage>
          -
          <lpage>285</lpage>
          . URL: https://aclanthology.org/2024.lrec-main.25/.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Davani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Díaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Prabhakaran</surname>
          </string-name>
          ,
          <article-title>Dealing with disagreements: Looking beyond the majority vote in subjective annotations</article-title>
          ,
          <source>Transactions of the Association for Computational Linguistics</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>92</fpage>
          -
          <lpage>110</lpage>
          . URL: https://doi.org/10.1162/tacl_a_00449. doi:10.1162/tacl_a_00449.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>