<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Checker Hacker at CheckThat! 2024: Detecting Check-Worthy Claims and Analyzing Subjectivity with Transformers</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Syeda DuaE Zehra</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kushal Chandani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Muhammad Khubaib</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ahmed Ali Aun Muhammed</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Faisal Alvi</string-name>
          <email>faisal.alvi@sse.habib.edu.pk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abdul Samad</string-name>
          <email>abdul.samad@sse.habib.edu.pk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dhanani School of Science and Engineering, Habib University</institution>
          ,
          <addr-line>Karachi</addr-line>
          ,
          <country country="PK">Pakistan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>3370</volume>
      <fpage>236</fpage>
      <lpage>249</lpage>
      <abstract>
        <p>This paper presents our approach to the CheckThat! Lab, which is designed to address the issue of disinformation. We participated in CheckThat! Lab Task 1, which focuses on identifying check-worthy claims in various forms of media, and Task 2, which targets the detection of subjective viewpoints in news articles. For both tasks we worked on the English dataset only. For Task 1, after standard preprocessing, we used an ensemble approach that combined two fine-tuned models, BERT-Base-Uncased and XLM-RoBERTa-Base, averaging their probabilities to obtain a unified ensemble probability. Our Task 1 F1 score was 0.696, ranking 14th on the English leaderboard. For Task 2, after standard preprocessing, we augmented our data using Google AI Studio and its gemini-1.0-pro-latest model, and then fine-tuned the transformer-based model RoBERTa on the augmented dataset. Our Task 2 macro F1 score was 0.7081, ranking 4th on the English leaderboard.</p>
      </abstract>
      <kwd-group>
        <kwd>CLEF CheckThat!</kwd>
        <kwd>fact-checking</kwd>
        <kwd>transformer models</kwd>
        <kwd>binary classification</kwd>
        <kwd>dataset</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The CLEF CheckThat! Lab [17] initiative is at the forefront of technological developments in automated
fact-checking, aiming to combat misinformation in the digital age. Misinformation poses significant
risks to public discourse and democratic processes, making the development of effective fact-checking
tools crucial. In the 2024 edition [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ][17], the Lab focuses on two key tasks, each addressing critical
aspects of this challenge.
      </p>
      <p>The first task concentrated on assessing the check-worthiness of claims made in tweets and
other English texts. This involves identifying which statements require verification, thereby
prioritizing effort: not all claims can be fact-checked under resource constraints, so determining
check-worthiness ensures that the most impactful misinformation is addressed promptly. The second
task aimed to distinguish subjective opinions from objective facts in the sentences of news articles,
which is essential for maintaining factual integrity and preventing the spread of misinformation. By
accurately identifying and separating opinions from facts, we can improve the reliability of news
content and support informed public discourse. Unlike sentiment analysis, which focuses on
identifying emotional tones, subjectivity analysis complements Task 1: it
discerns statements that may require verification (subjective) from those presenting factual
information (objective). By categorizing claims, fact-checkers can prioritize rigorous scrutiny for
subjective claims that may influence public opinion or require contextual evaluation, while focusing factual
verification efforts on objective claims backed by evidence. Together, these tasks provide
a comprehensive approach to validating information, preventing the spread of misinformation, and
upholding the credibility of information sources.</p>
      <p>Both tasks are framed as binary classification and measure effectiveness through F1 scores, ensuring
precise and efficient validation of information and helping prevent the spread of misinformation.</p>
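      <p>Both tasks report their results as F1 scores. As a concrete reference for what the leaderboard numbers measure, the following is a minimal sketch of per-class and macro-averaged F1 using the standard formulas; it is not the official evaluation script, and the toy labels are ours:</p>

```python
def f1(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall for one class."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred, labels) -> float:
    """Unweighted mean of per-class F1 (the Task 2 metric style)."""
    scores = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

# Toy example: one subjective sentence misclassified as objective.
score = macro_f1(["Subj", "Obj", "Subj", "Obj"],
                 ["Subj", "Obj", "Obj", "Obj"],
                 ["Subj", "Obj"])
```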
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review</title>
      <p>2.1. Task 1</p>
      <p>
        In recent years, the CLEF CheckThat! competition has showcased innovative approaches to claim
detection. Top teams have consistently relied on transformer-based models to enhance their systems.
Accenture, the top-ranked team in 2020, utilized a RoBERTa-based model, incorporating mean pooling
and dropout layers to improve generalization and reduce overfitting [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This strategy helped them
achieve strong performance over baseline models.
      </p>
      <p>
        In 2021, NLP&amp;IR@UNED explored several pre-trained transformer models, discovering that
BERTweet was the most effective on the development set. BERTweet, trained on 850 million English
tweets and 23 million COVID-19-specific tweets, excelled at identifying check-worthy claims [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ][
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
The second-place team, Fight for 4230, also used BERTweet but added a dropout layer and implemented
data augmentation techniques [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ][
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In the following year, PoliMi-FlatEarthers stood out by fine-tuning
GPT-3 for Task 1B. They combined deep learning with domain-specific customization to accurately
classify check-worthy claims [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ][
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Finally, in 2023, OpenFact leveraged a fine-tuned GPT-3 model,
utilizing a rich, annotated dataset of sentences from political debates and speeches. Their data-centric
approach, tailored specifically for fact-checking, helped them outperform other submissions [11][12].
Additionally, recent research has shown that models like FACT-GPT, which use synthetic data generated
by large language models for training, can closely match human judgment in identifying related claims,
highlighting the potential of AI tools to enhance the fact-checking process [20].
      </p>
      <p>2.2. Task 2</p>
      <p>This task appeared in only one previous edition, CheckThat! 2023. The top submissions used many
different models, the most common being BERT, RoBERTa, ChatGPT, and GPT-3. Team DWReCo [13][16]
achieved the best score in the English category; their approach involved augmenting the dataset using
GPT and then fine-tuning RoBERTa on it. Two other teams also adopted a data augmentation approach.
The overall best score on the multilingual dataset was achieved by Team NN [14][16], who used the
XLM-RoBERTa model trained on the multilingual dataset. Team Thesis Titan [15][16] achieved
top positions in 4 languages; their approach was to fine-tune the mDeBERTa model for each
specific language separately, which allowed them to achieve those scores. Many other teams also tried an
ensemble approach and obtained decent results.
      </p>
      <p>Similar to Team DWReCo’s strategy, FACT-GPT utilized large language models (LLMs) to
generate synthetic training data, enhancing the adaptability of models to specific tasks, which is
crucial for claim matching in fact-checking contexts. Like the approaches that used XLM-RoBERTa on
multilingual datasets, FACT-GPT demonstrated that fine-tuning language models on synthetic datasets
can improve classification accuracy and reduce computational costs. Both FACT-GPT and the
approaches in CheckThat! 2023 emphasize the importance of leveraging AI to assist and enhance
human expertise in the fact-checking process [20].</p>
      <p>3. Task 1</p>
      <sec id="sec-2-1">
        <title>3.1. Our Approach</title>
        <p>
          The goal of Task 1 [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] was to evaluate the necessity of fact-checking claims in tweets and transcriptions.
This typically requires either the expertise of professional fact-checkers or answers to several auxiliary
questions by human annotators.
        </p>
        <sec id="sec-2-1-1">
          <title>3.1.1. Data Preparation, Model Training and Evaluation</title>
          <p>We were provided with three datasets [18]: the training dataset, the dev dataset, and the test-dev dataset.
Later, we received a fourth dataset, the main test dataset, which was unlabeled. Our initial modeling
used the following parameters with the BERT-base-uncased model:
• Batch size: 8 for both training and validation
• Learning rate: 2 × 10<sup>−5</sup>
• Number of epochs: 3
After training, we used the model to process the test-dev dataset. The procedure involved:
1. Tokenizing the text entries.
2. Feeding the tokenized data into the model.
3. Converting the output logits to probabilities using a sigmoid function.
4. Classifying each entry as “Yes” or “No” based on a probability threshold of 0.5.
5. Collecting these classifications and their corresponding “Sentence_id” into a list for comparison
with the original labels.</p>
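          <p>Steps 3 and 4 above can be sketched as follows. This is a hedged illustration, not our training code: the helper names are ours, and the toy logits stand in for outputs of the fine-tuned BERT model.</p>

```python
import math

def sigmoid(logit: float) -> float:
    # Map a raw model logit to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-logit))

def label_from_logit(logit: float, threshold: float = 0.5) -> str:
    # "Yes" = check-worthy, "No" = not check-worthy.
    return "Yes" if sigmoid(logit) >= threshold else "No"

# Toy logits standing in for model outputs on three text entries;
# pair each label with its entry index (analogous to "Sentence_id").
predictions = [(i, label_from_logit(z)) for i, z in enumerate([2.3, -1.7, 0.1])]
```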
          <p>The approach achieved an F1 score of 0.80 on the test-dev dataset.</p>
        </sec>
        <sec id="sec-2-1-2">
          <title>3.1.2. Modifications Made For Final Approach</title>
          <p>To improve results, we experimented with various models such as ALBERT, RoBERTa-base, XLM-RoBERTa,
and ELECTRA. The most significant improvement was observed with XLM-RoBERTa-base and
BERT-base-uncased. We then implemented an ensemble approach with these two models using the
following training configurations:
• Batch size: 16 for both training and validation
• Learning rate: 5 × 10<sup>−5</sup>
• Number of epochs: 5
• Weight decay: 0.005
Both trained models were evaluated on the test-dev dataset. Each text data point from the test dataset
was processed by both models, and their predictions were averaged to form a single ensemble probability.
This probability determined the final label (“Yes” or “No”), which was collected along with the text’s
unique identifier into a list.</p>
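          <p>The averaging step can be sketched as below. This is a minimal illustration with hypothetical probabilities standing in for the BERT-base-uncased and XLM-RoBERTa-base outputs; the function name is ours.</p>

```python
def ensemble_labels(probs_a, probs_b, threshold=0.5):
    """Average two models' per-example probabilities, then threshold."""
    labels = []
    for p_a, p_b in zip(probs_a, probs_b):
        p_ens = (p_a + p_b) / 2.0  # unified ensemble probability
        labels.append("Yes" if p_ens >= threshold else "No")
    return labels

# The models disagree on the second example; the averaged probability decides.
labels = ensemble_labels([0.9, 0.35, 0.2], [0.8, 0.75, 0.1])
```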
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>3.2. Results</title>
        <p>4. Task 2</p>
      </sec>
      <sec id="sec-2-3">
        <title>4.1. Our Approach</title>
        <p>
          The goal of Task 2 was to evaluate the Subjectivity of news articles and decide whether a sentence from
the news article [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ][19] was subjective or objective.
        </p>
        <sec id="sec-2-3-1">
          <title>4.1.1. Data Preparation, Model Training and Evaluation</title>
          <p>Our focus was on the English datasets: the training dataset, the dev dataset, and the test-dev dataset.
We used data augmentation to enhance our dataset, as the training dataset was very small and the model
was not able to learn effectively. We initially augmented the data using WordNet via
the NLTK library. This method picked one word from the sentence at random and replaced it with one of its
synonyms.</p>
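          <p>The random synonym-replacement idea can be sketched as follows. This is a toy illustration: the hard-coded synonym table stands in for the actual WordNet/NLTK lookup, and the seeded generator is only for reproducibility.</p>

```python
import random

# Toy stand-in for WordNet synsets; the real code queried NLTK's wordnet corpus.
SYNONYMS = {
    "small": ["little", "tiny"],
    "fast": ["quick", "rapid"],
}

def augment(sentence: str, rng: random.Random) -> str:
    """Replace one randomly chosen word that has known synonyms."""
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if w in SYNONYMS]
    if not candidates:
        return sentence  # nothing replaceable, keep the sentence unchanged
    i = rng.choice(candidates)
    words[i] = rng.choice(SYNONYMS[words[i]])
    return " ".join(words)

variant = augment("the small dataset was fast", random.Random(0))
```

As the paper notes later, swapping a single word this way can distort the sentence's meaning, which motivated the move to LLM-generated paraphrases.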
          <p>Our initial modeling was done using mDeBERTa and we used the following parameters:
• Batch size: 16 for both training and validation
• Learning rate: 5 × 10<sup>−5</sup>
• Number of epochs: 6
• Warmup steps: 100
• Weight decay: 0.01
After training, we used the model to process the test-dev dataset. The procedure involved:
1. Processing the data and tokenizing the text entries.
2. Feeding the tokenized data into the model.
3. Converting the output logits to probabilities.
4. Classifying each entry as "Subj" or "Obj" using Sigmoid and Argmax.
5. Collecting these classifications into a list for comparison with the original labels.</p>
          <p>This approach achieved an F1 score of 0.76 on the dataset that had been augmented
using WordNet and NLTK.</p>
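          <p>The argmax step (4) for the two-class case can be sketched as below. This is generic logic rather than our exact script, and the label-to-index convention shown is an assumption for illustration.</p>

```python
import math

LABELS = ("Obj", "Subj")  # assumed convention: index 0 = objective, 1 = subjective

def classify(logits):
    """Pick the higher-scoring class from a pair of output logits."""
    # Softmax is monotonic, so argmax over probabilities equals argmax over
    # logits; probabilities are computed anyway to mirror the pipeline.
    exps = [math.exp(z) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return LABELS[probs.index(max(probs))]

pred = classify([0.4, 1.9])  # second logit is larger, so the label is "Subj"
```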
        </sec>
        <sec id="sec-2-3-2">
          <title>4.1.2. Modifications Made For Final Approach</title>
          <p>The approach of replacing words with their synonyms at times did not preserve the meaning of the
sentence. We therefore decided to augment our data with the Gemini API via Google AI Studio, using its ’gemini-1.0-pro-latest’
model. The approach in this case was to generate three similar sentences
for each "Objective"-labeled sentence and five similar sentences for each "Subjective"-labeled sentence. This
gave us a more balanced dataset and enabled the model to learn better. We then
imported the resulting dataset, called "data", which has been uploaded to GitHub as well. While iterating,
we tried different models and even used an ensemble approach with models such as RoBERTa-base,
mDeBERTa, XLM-RoBERTa, and BERT-base, but the best results were achieved using RoBERTa-base
alone, so we used that for our final submission with the following training configurations:
• Batch size: 64 for both training and validation
• Learning rate: 5 × 10<sup>−6</sup>
• Number of epochs: 12
• Warmup steps: 100
• Weight decay: 0.01
The calculated probability was compared against a threshold of 0.5, which determined
the final label ("Subj" or "Obj").</p>
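          <p>The effect of the 3-versus-5 paraphrase ratio on class balance can be checked with simple arithmetic. The counts below are hypothetical (the real dataset sizes differ); the sketch only shows how an imbalanced split narrows after augmentation.</p>

```python
def augmented_counts(n_obj: int, n_subj: int,
                     per_obj: int = 3, per_subj: int = 5):
    """Each original sentence is kept and paraphrased k times per class."""
    return n_obj * (1 + per_obj), n_subj * (1 + per_subj)

# Hypothetical imbalanced split: 500 objective vs 300 subjective sentences.
# After augmentation the classes are much closer: 2000 vs 1800.
obj, subj = augmented_counts(500, 300)
```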
        </sec>
      </sec>
      <sec id="sec-2-4">
        <title>4.2. Results</title>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>5. Analysis</title>
      <p>We observed an overall drop in our model's scores: it achieved high scores on the training set
compared to the dev, dev-test, and test sets, which indicates potential overfitting. This means
that the model did not generalize well to new, unseen data.</p>
      <p>In Task 1, the validation loss showed a slight increase, which potentially contributed to the model’s
underperformance on new data. This increase in validation loss indicates that the model might
have started overfitting to the training data, thereby reducing its generalizability. As a result, when
the model was applied to the test set, it did not produce equally good results. Additionally, the
class imbalance in the dataset could have affected the model’s performance. With fewer instances
labeled as check-worthy compared to non-check-worthy, the model might have struggled to
accurately identify the check-worthy instances, leading to a lower overall score. The
preprocessing steps, while essential for cleaning and preparing the data, might not have fully addressed the
inherent variability in the text, further complicating the model’s ability to generalize well to unseen data.
Moreover, in Task 2, the low SUBJ F1 score on the test set suggests that the model had
difficulty with the "SUBJ" class. One possible reason is that the features used for
identifying the "SUBJ" class may not be strong or distinctive enough, or there may be more variability or
noise in the "SUBJ" class in the test set than in the training set. Another reason for the low SUBJ
F1 could be the way we conducted our data augmentation. Our approach created three
similar sentences for each "Objective"-labeled sentence and five for each "Subjective"-labeled
sentence. Since the generated sentences may all have been close to the original from which they were
made, their features may have been highly similar, which could have contributed to
over-fitting.</p>
    </sec>
    <sec id="sec-4">
      <title>6. Conclusion</title>
      <p>In conclusion, our detailed exploration in the CheckThat! Lab 2024 challenge demonstrated the
significant capabilities of transformer-based models in tasks of check-worthiness detection and
subjectivity analysis. For Task 1, the ensemble method combining XLM-RoBERTa and BERT-base-uncased
models effectively navigated the complexities of identifying check-worthy claims through a strategic
ensemble of predictions and a robust training regimen involving multiple epochs (up to 5) and
a learning rate of 5 × 10<sup>−5</sup>.</p>
      <p>In Task 2, the fine-tuned RoBERTa model proved better than the other models we tested,
showing stronger performance in differentiating subjective from objective statements on the
dev-test file, using a refined approach with a lower learning rate (5 × 10<sup>−6</sup>) and an increased number
of epochs (12) to ensure thorough learning. However, the performance, as indicated by a macro F1 score
of 0.7081 and an F1 score of 0.54 for the SUBJ class, suggests room for improvement. A deeper analysis
reveals that the model struggled with the "SUBJ" class, possibly due to weaker feature representation or
greater variability and noise in the test set. Class imbalance may also have been an issue,
leading to weaker SUBJ identification. Future work could focus on
enhancing the feature set for this class and reducing noise through better data preprocessing and augmentation.
Data augmentation played a crucial role here, bolstering the dataset and thereby enhancing
the model’s ability to handle nuanced textual variations. While these results were promising, they
also suggest areas for further refinement to optimize performance, particularly in handling
more complex misinformation scenarios. These efforts exemplify the essential role of adaptive,
transformer-based architectures in leveraging deep learning for critical media literacy tasks in a
multilingual context.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>The authors would like to acknowledge the support provided by the Office of Research (OoR) at Habib
University, Karachi, Pakistan, for funding this project through internal research grant IRG-2235.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Barrón-Cedeño, A., et al. (2024). The CLEF-2024 CheckThat! Lab: Check-Worthiness, Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness. In: Goharian, N., et al. (eds.), Advances in Information Retrieval. ECIR 2024. Lecture Notes in Computer Science, vol. 14612. Springer, Cham. https://doi.org/10.1007/978-3-031-56069-9_62</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Hasanain, M., Suwaileh, R., Weering, S., Li, C., Caselli, T., Zaghouani, W., Barrón-Cedeño, A., Nakov, P., Alam, F.: Overview of the CLEF-2024 CheckThat! Lab Task 1 on Check-Worthiness Estimation of Multigenre Content. In: Faggioli, G., Ferro, N., Galuščáková, P., García Seco de Herrera, A. (eds.), Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum, CLEF 2024, Grenoble, France, 2024.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Struß, J. M., Ruggeri, F., Barrón-Cedeño, A., Alam, F., Dimitrov, D., Galassi, A., Siegel, M., Wiegand, M.: Overview of the CLEF-2024 CheckThat! Lab Task 2 on Subjectivity in News Articles. In: Faggioli, G., Ferro, N., Galuščáková, P., García Seco de Herrera, A. (eds.), Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum, CLEF 2024, Grenoble, France, 2024.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Williams, E., Rodrigues, P., Novak, V.: Accenture at CheckThat! 2020: If you say so: Post-hoc fact-checking of claims using transformer-based models. In: Cappellato et al., CLEF 2020.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Shaar, S., Nikolov, A., Babulkov, N., Alam, F., Barrón-Cedeño, A., Elsayed, T., Hasanain, M., Suwaileh, R., Haouari, F., Da San Martino, G., Nakov, P.: Overview of CheckThat! 2020 English: Automatic Identification and Verification of Claims in Social Media. In: Cappellato et al., CLEF 2020.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Martinez-Rico, J. R., Martinez-Romo, J., Araujo, L.: NLP&amp;IR@UNED at CheckThat! 2021: Check-worthiness estimation and fake news detection using transformer models, 2021.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Zhou, X., Wu, B., Fung, P.: Fight for 4230 at CLEF CheckThat! 2021: Domain-specific preprocessing and pretrained model for ranking claims by check-worthiness, 2021.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Shaar, S., Hasanain, M., Hamdan, B., Ali, Z. S., Haouari, F., Nikolov, A., Kutlu, M., Kartal, Y. S., Alam, F., Da San Martino, G., Barrón-Cedeño, A., Miguez, R., Beltrán, J., Elsayed, T., Nakov, P.: Overview of the CLEF-2021 CheckThat! Lab: Task 1 on Check-Worthiness Estimation in Tweets and Political Debates. In: Cappellato et al., CLEF 2021, 369-392.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Agresti, S., Hashemian, A. S., Carman, M. J.: PoliMi-FlatEarthers at CheckThat! 2022: GPT-3 applied to claim detection. In: Working Notes of CLEF 2022 - Conference and Labs of the Evaluation Forum, CLEF 2022, Bologna, Italy, 2022.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] Nakov, P., Barrón-Cedeño, A., Da San Martino, G., Alam, F., Míguez, R., Caselli, T., Kutlu, M., Zaghouani, W., Li, C., Shaar, S., Mubarak, H., Nikolov, A., Kartal, Y. S.: Overview of the CLEF-2022 CheckThat! Lab: Task 1 on Identifying Relevant Claims in Tweets. In: Cappellato et al., CLEF 2022, 368-392.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>