<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>TIFIN at CheckThat! 2025: Reasoning-Guided Claim Normalization for Noisy Multilingual Social Media Posts</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Manan Sharma</string-name>
          <email>manan.sharma@tifin.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Arya Suneesh</string-name>
          <email>arya.suneesh@tifin.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manish Jain</string-name>
          <email>manish.jain@tifin.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pawan Kumar Rajpoot</string-name>
          <email>pawan@tifin.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Prasanna Devadiga</string-name>
          <email>prasanna@askmyfi.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bharatdeep Hazarika</string-name>
          <email>bharatdeep@askmyfi.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ashish Shrivastava</string-name>
          <email>ashish.shrivastava@workifi.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kishan Gurumurthy</string-name>
          <email>kishan.gurumurthy@workifi.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anshuman B Suresh</string-name>
          <email>anshuman.suresh@askmyfi.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aditya U Baliga</string-name>
          <email>aditya@askmyfi.com</email>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>We address claim normalization for multilingual misinformation detection: transforming noisy social media posts into clear, verifiable statements across 20 languages. Our key contribution demonstrates how systematic decomposition of posts using Who, What, Where, When, Why and How questions enables robust cross-lingual transfer despite training exclusively on English data. Our methodology incorporates fine-tuning Qwen3-14B using LoRA on the provided dataset after intra-post deduplication, token-level recall filtering for semantic alignment and retrieval-augmented few-shot learning with contextual examples during inference. Our system achieves METEOR scores ranging from 41.16 (English) to 15.21 (Marathi), securing third rank on the English leaderboard and fourth rank for Dutch and Punjabi. The approach shows a 41.3% relative improvement in METEOR over baseline configurations and substantial gains over existing methods. Results demonstrate effective cross-lingual generalization for Romance and Germanic languages while maintaining semantic coherence across diverse linguistic structures.</p>
      </abstract>
      <kwd-group>
<kwd>claim normalization</kwd>
        <kwd>misinformation detection</kwd>
        <kwd>multilingual NLP</kwd>
        <kwd>social media analysis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Misinformation represents the foremost global threat for 2025, according to the World Economic Forum’s
Global Risks Report [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], while false news spreads up to 10 times faster than accurate reporting on
social media platforms [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Social media giants have recently abandoned traditional fact-checking
programs in favor of community-driven approaches [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], creating new gaps in verification systems
precisely when misinformation campaigns target everything from elections to disaster response. The
noisy nature of social media posts makes it challenging to identify important claims that require manual
fact-checking, forcing researchers to develop automated solutions for processing the overwhelming
volume of misleading content. Our work addresses this critical challenge through CheckThat! Lab CLEF
2025 Task 2: Claim Normalization, which focuses on transforming chaotic social media posts into clear,
verifiable statements across 20 languages. This text generation task requires systems to extract core
assertions from noisy posts and present them in normalized forms suitable for fact-checking pipelines,
representing a fundamental step toward scaling verification efforts to match the speed and volume of
misinformation spread.
      </p>
      <sec id="sec-1-1">
        <title>1.1. Task Overview</title>
        <p>
          CheckThat! Lab CLEF 2025 [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] Task 2 [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] introduces the problem of simplifying noisy social media
posts into normalized claims that fact-checkers can process efficiently. The task operates across 20 languages
including English, Arabic, German, French, Spanish, Hindi and 14 others, requiring systems to handle
diverse linguistic structures and cultural contexts. Participants face two distinct settings: monolingual,
where training, development and test data exist for the same language and zero-shot, where only test
data exists for the target language. The monolingual setting covers 13 languages with full datasets,
while the zero-shot setting evaluates generalization across 7 languages including Dutch, Romanian,
Bengali, Telugu, Korean, Greek and Czech. Posts originate from various social media platforms including
Twitter, Reddit and Facebook, sourced from Google Fact-check Explorer to ensure real-world relevance.
Systems generate normalized claims evaluated using METEOR score, measuring the quality of simplified
text against human-annotated ground truth. This research addresses the practical challenge faced by
fact-checkers who must process thousands of posts daily, extracting verifiable claims from content laden
with hashtags, mentions, emojis and informal language that obscures the core assertions requiring
verification.
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        At its core, claim normalization is an abstractive generation task closely related to summarization, but
with key differences. Sequence-to-sequence models like BART [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] or T5 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] have advanced
general-purpose summarization. Controlled summarization techniques allow setting summary length or focus
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. However, generic summaries may omit critical facts or introduce hallucinations, making
them unreliable for fact-checking. For example, Kryściński et al. (2020) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] showed that abstractive
models often add contradictory information. Utama et al. (2022) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and Durmus et al. (2020) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]
developed QA-based checks for factual consistency. Claim normalization instead prioritizes factual
precision and context-independence: the generated claim must be fully verifiable on its own. Sundriyal
et al. [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] note that unlike typical summaries, normalized claims “must be self-contained and verifiable”.
This means, for example, resolving entities or adding minimal context so that the claim cannot be
misunderstood when isolated (e.g. clarifying that “Bird” refers to the scooter company, rather than the
animal).
      </p>
      <p>
In practice, many systems treat normalization as a specialized summarization task. For instance, Reddy
et al. (2024) [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] recast document-level claim extraction as extractive summarization followed by
decontextualization: they extract central sentences and then use a QA-based model to expand them
into stand-alone claims. This approach yielded higher relevance (precision@1) and fact consistency
in their test cases. Similarly, models trained for text summarization (T5/BART/PEGASUS) have been
applied directly as baselines for normalization. However, the unique goal of preserving a single factual
assertion often calls for tailored strategies.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. Model Architecture</title>
        <p>
          We employ Qwen3-14B [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] as our base model due to its strong multilingual capabilities and efficient
architecture for fine-tuning across diverse languages. Qwen3-14B demonstrates robust performance
on multilingual tasks, achieving 79.69 on the MMMLU benchmark, while maintaining computational
eficiency, making it well-suited for our cross-lingual claim normalization objectives. The model’s strong
multilingual foundation provides an ideal starting point for fine-tuning across diverse languages without
sacrificing performance on cross-lingual understanding tasks. We fine-tune the model using Low-Rank
Adaptation (LoRA) [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] with 4-bit quantization for memory efficiency. Our training configuration
includes:
        </p>
        <p>• LoRA rank  = 16, scaling factor  = 32, dropout rate 0.05
• Target modules: attention and projection layers
• Training epochs: 3
• Per-device batch size: 6, gradient accumulation steps: 4 (efective batch size: 24)
• Optimizer: paged AdamW 8-bit with learning rate 3 × 10− 4
• Precision: bfloat16 with gradient checkpointing enabled
• Hardware: Single NVIDIA A100 GPU (40GB VRAM)</p>
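        <p>The adaptation scheme above can be sketched in plain Python: the frozen weight matrix is left untouched while a rank-r correction B·A, scaled by alpha/r, is added to each adapted projection. The toy dimensions below are ours for illustration; the actual system fine-tunes Qwen3-14B through adapter libraries with 4-bit quantized base weights.</p>
        <p>
```python
# Plain-Python sketch of the LoRA update rule h = W·x + (alpha/r)·B·(A·x).
# Toy dimensions are illustrative, not the real model's layer sizes.
import random

def matvec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=32, r=16):
    """Frozen projection W·x plus the scaled low-rank correction B·A·x."""
    base = matvec(W, x)                 # pretrained weights stay frozen
    low_rank = matvec(B, matvec(A, x))  # only A and B are trained
    scale = alpha / r                   # scaling factor / rank, as in Sec. 3.1
    return [b + scale * c for b, c in zip(base, low_rank)]

random.seed(0)
d_out, d_in, r = 4, 6, 2
W = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(d_out)]
A = [[random.gauss(0, 0.01) for _ in range(d_in)] for _ in range(r)]
B = [[0.0] * r for _ in range(d_out)]   # B starts at zero: output == W·x initially
x = [1.0] * d_in
print(lora_forward(W, A, B, x, alpha=32, r=r) == matvec(W, x))  # prints True
```
        </p>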
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Data Preprocessing</title>
        <p>The CheckThat! Lab CLEF 2025 Task 2 dataset (Table 1) encompasses 26,399 instances across 20
languages, representing one of the most comprehensive multilingual collections for claim normalization
research. The dataset exhibits significant linguistic diversity, with English comprising the largest
subset (13,830 instances), followed by Spanish (4,336), Portuguese (2,183) and French (1,469). Thirteen
languages provide complete training, development and test splits for monolingual evaluation, ranging
from high-resource languages like English to lower-resource languages such as Tamil (252 instances) and
Polish (304 instances). Seven additional languages—Bengali, Czech, Greek, Korean, Dutch, Romanian
and Telugu—are included exclusively for zero-shot evaluation with 1,068 test instances total. Post
lengths vary dramatically across languages and cultural contexts, from concise Tamil posts averaging
26 words to verbose Czech posts averaging 332 words, while normalized claims maintain relative
consistency (8.87-19.85 words) across all languages.</p>
        <sec id="sec-3-2-1">
          <title>3.2.1. Data Cleaning and Quality Control</title>
          <p>We first address the inherent noise in social media posts through intra-post deduplication, identifying
and removing repeated sentences within individual posts using MD5 fingerprinting of normalized text
segments. This eliminates redundant content while preserving unique information. More critically, we
filter post-claim pairs based on semantic alignment to ensure meaningful correlations. Using token-level
recall between posts and their corresponding normalized claims, we retain only pairs with recall scores
above 0.4, effectively removing instances where claims bear insufficient relation to their source posts.</p>
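          <p>A minimal sketch of these two filters (our own illustrative implementation; the sentence splitter and tokenizer are simplified stand-ins, while the MD5 fingerprinting and the 0.4 recall threshold follow the description above):</p>
          <p>
```python
# (1) intra-post sentence deduplication via MD5 fingerprints of normalized text;
# (2) token-level recall of the claim against the post, keeping pairs > 0.4.
import hashlib
import re

def dedupe_sentences(post: str) -> str:
    """Drop repeated sentences within one post, keeping first occurrences."""
    seen, kept = set(), []
    for sent in re.split(r"(?<=[.!?])\s+", post.strip()):
        norm = re.sub(r"\s+", " ", sent.lower()).strip()
        fp = hashlib.md5(norm.encode("utf-8")).hexdigest()
        if norm and fp not in seen:
            seen.add(fp)
            kept.append(sent)
    return " ".join(kept)

def token_recall(post: str, claim: str) -> float:
    """Fraction of claim tokens that also appear in the post."""
    post_tokens = set(re.findall(r"\w+", post.lower()))
    claim_tokens = re.findall(r"\w+", claim.lower())
    if not claim_tokens:
        return 0.0
    return sum(t in post_tokens for t in claim_tokens) / len(claim_tokens)

post = "AC MASJID MELEDAK, 2 JEMAAH MENINGGAL DUNIA. AC MASJID MELEDAK, 2 JEMAAH MENINGGAL DUNIA."
print(dedupe_sentences(post))  # only one copy of the repeated sentence survives
keep = token_recall("health workers carry dead bodies", "empty body bags planted") > 0.4
print(keep)                    # the poorly aligned pair is filtered out
```
          </p>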
          <p>Table 2 illustrates representative cases where posts and claims exhibit poor semantic alignment,
justifying our recall-based filtering approach. The highlighted example demonstrates a particularly
egregious case where the post discusses health workers during COVID-19, while the assigned claim
addresses an unrelated conspiracy theory about empty body bags.</p>
        </sec>
        <sec id="sec-3-2-2">
          <title>3.2.2. Data Augmentation through 5W1H Framework</title>
          <p>
            To enhance the model’s reasoning capabilities and expand the training signal, we augment each original
post-claim pair with structured 5W1H reasoning components [
            <xref ref-type="bibr" rid="ref19">19</xref>
            ]. For every training instance, we systematically generate intermediate reasoning steps that decompose
the post according to What (subject/topic), Who (individuals/organizations), Where (location), When
(timing), How (process) and Why (causation). This augmentation transforms each simple post-claim pair
into a rich training example that includes both the reasoning process and the final normalized claim.
The expanded format provides the model with explicit guidance on how to systematically analyze social
media posts before generating claims, effectively multiplying the learning signal from each original
training instance. The prompt utilized is described in Appendix A.</p>
          <p>Table 2. Examples of post-claim pairs with poor semantic alignment (post → assigned claim):
• "Photo Before Landing Of PK-320" → Image shows Pakistani plane moments before crash in Karachi in May 2020
• "Strong people these health workers for Covid 19 ... they carry the dead bodies with one hand" → Authorities planted empty body bags in ’fake’ pandemic plot
• "AC MASJID MELEDAK, 2 JEMAAH MENINGGAL DUNIA AC MASJID MELEDAK, 2 JEMAAH MENINGGAL DUNIA AC MASJID MELEDAK, 2 JEMAAH MENINGGAL DUNIA" → Photo shows a fatal mosque blast in Bangladesh / None
• "Vladmir Putin has dropped 800 Tigers and lions across the country to push people to stay home..sana all Russia: Containment:" → This photo shows a lion patrolling Russian streets during coronavirus lockdown
• "Say it...you stand with.....?? ZELENSKYY 2018 5 @chrisskyarmy1 45" → Photo shows Volodymyr Zelensky holding a jersey featuring a swastika</p>
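          <p>One augmented training instance might be assembled as follows (a hedged sketch: the JSON field names follow the prompt in Appendix A, but the helper, record layout and example values are our illustration, not the released training code):</p>
          <p>
```python
# Pair a post with its 5W1H decomposition and final normalized claim,
# serialized as the JSON target the model learns to produce.
import json

def build_training_example(post: str, reasoning: dict, claim: str) -> dict:
    """Combine reasoning steps and the claim into one supervised example."""
    target = dict(reasoning)
    target["claim"] = claim
    return {
        "input": f"Post: {post}",
        "output": json.dumps(target, ensure_ascii=False),
    }

example = build_training_example(
    post="Photo Before Landing Of PK-320",
    reasoning={
        "what": "Photo of a plane before landing",
        "who": "Flight PK-320",
        "where": "Karachi",
        "when": "May 2020",
        "how": "",   # empty string when the post gives no information
        "why": "",
    },
    claim="Image shows Pakistani plane moments before crash in Karachi in May 2020",
)
print(json.loads(example["output"])["claim"])
```
          </p>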
        </sec>
        <sec id="sec-3-2-3">
          <title>3.2.3. Dataset Composition</title>
          <p>The final preprocessed dataset consists exclusively of English-language posts and their corresponding
normalized claims, now enriched with structured reasoning annotations. We focus on English-language
content to ensure consistency in linguistic patterns and reduce complexity during the initial training
phase, while leveraging the base model’s strong multilingual and reasoning capabilities for potential
cross-lingual transfer during inference. The combination of quality filtering and 5W1H augmentation
results in a more robust training set that teaches the model both what to extract and how to reason
through the extraction process.</p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Context Augmentation and Retrieval</title>
        <p>
          Inspired by the GPT-RE framework for in-context learning in relation extraction [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], we implement a
retrieval-augmented approach to address context-deficient posts using dense embeddings. We index the
training set using FAISS [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ] with embeddings from OpenAI’s text-embedding-3-small model. For each
post, we retrieve the top-5 most similar posts based on cosine similarity. Posts identified as semantic
subsets of longer, more informative posts are replaced with their supersets during training. Following
the GPT-RE methodology, during inference, the top-5 similar posts serve as few-shot examples in the
prompt, providing contextual guidance for claim generation. This retrieval-based few-shot learning
approach enables the model to leverage relevant examples from the training data to better understand
the structure and style of effective claim normalization.
        </p>
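        <p>The retrieval step can be sketched as follows. In the actual system FAISS indexes text-embedding-3-small vectors; here plain cosine similarity over small hypothetical vectors stands in for both, since only the top-k ranking logic is being illustrated:</p>
        <p>
```python
# Toy nearest-neighbour retrieval: rank indexed posts by cosine similarity
# to the query embedding and keep the top k as few-shot prompt examples.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def top_k(query_vec, index, k=5):
    """Return the k indexed posts most similar to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item["vec"]), reverse=True)
    return [item["post"] for item in ranked[:k]]

index = [  # hypothetical training posts with made-up 3-d embeddings
    {"post": "claim about vaccine side effects", "vec": [0.9, 0.1, 0.0]},
    {"post": "photo of flooded street",          "vec": [0.0, 0.2, 0.9]},
    {"post": "vaccine ingredients rumour",       "vec": [0.8, 0.3, 0.1]},
]
examples = top_k([1.0, 0.0, 0.0], index, k=2)  # few-shot examples for the prompt
print(examples)
```
        </p>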
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Final Dataset</title>
        <p>We evaluated our approach across 13
languages: English, German, French, Spanish, Hindi, Marathi, Punjabi, Arabic, Polish, Dutch, Bengali,
Tamil and Telugu. Our primary focus centered on improving the English training set, with other
languages serving as cross-lingual evaluation benchmarks to assess model generalization capabilities.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>
        We evaluate model performance using standard text generation metrics: BLEU [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], ROUGE-1,
ROUGE-2, ROUGE-L [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], METEOR [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] and BERTScore [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. METEOR serves as our primary optimization
metric due to its emphasis on semantic similarity over exact lexical matching, which aligns better with
the goals of claim normalization.
      </p>
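      <p>The recall-weighted core that makes METEOR attractive here can be illustrated with a simplified unigram F-mean, with recall weighted 9:1 over precision as in the metric's standard parameterization (the real METEOR additionally matches stems and synonyms and applies a fragmentation penalty, which this sketch omits):</p>
      <p>
```python
# Simplified unigram F-mean at METEOR's default alpha = 0.9:
# F = P·R / (alpha·R + (1 - alpha)·P), so recall dominates the score.
from collections import Counter

def unigram_fmean(candidate: str, reference: str, alpha: float = 0.9) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    matches = sum((cand & ref).values())   # clipped unigram matches
    if matches == 0:
        return 0.0
    precision = matches / sum(cand.values())
    recall = matches / sum(ref.values())
    return precision * recall / (alpha * recall + (1 - alpha) * precision)

ref = "image shows pakistani plane moments before crash in karachi"
print(round(unigram_fmean("pakistani plane crash in karachi", ref), 3))
print(round(unigram_fmean("a photo of a plane", ref), 3))
```
      </p>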
      <p>Our fine-tuned Qwen3-14B model demonstrates robust multilingual claim normalization capabilities
across 14 languages, achieving consistent performance despite training exclusively on English data.
The model exhibits strong generalization with ROUGE-1 F1 scores ranging from 2.26 (Bengali) to
46.98 (English) and METEOR scores spanning 15.21 (Marathi) to 41.16 (English). Notably, BERTScore
maintains relatively high consistency across languages (83.25-95.28), indicating that the model preserves
semantic coherence even when lexical overlap varies significantly. This suggests that our 5W1H
reasoning framework effectively transfers cross-lingually, enabling the model to extract factual claims
despite linguistic differences.</p>
      <p>Romance Languages demonstrate exceptional performance, with Spanish (ROUGE-1 F1: 45.7,
METEOR: 39.06), French (40.57, 34.41), Italian (27.9, 36.76) and Portuguese (30.92, 23.31) achieving the
highest scores after English. This pattern indicates strong cross-lingual transfer within the Romance
family, likely due to shared linguistic structures and cognate relationships with Latin-derived vocabulary.</p>
      <p>Germanic Languages show moderate performance, with German achieving ROUGE-1 F1 of 30.58
and METEOR of 26.42, while Dutch records 24.89 and 17.2 respectively. The performance gap between
Germanic and Romance languages suggests that morphological and syntactic similarities to English
training data play a crucial role in transfer effectiveness.</p>
      <p>South Asian Languages exhibit variable performance patterns. Hindi achieves reasonable scores
(ROUGE-1 F1: 9.87, METEOR: 26.04), while Bengali and Marathi show limited lexical overlap but
maintain semantic coherence as evidenced by their BERTScore values (90.37 and 88.51 respectively).</p>
      <p>Arabic presents an interesting case with low lexical overlap scores (ROUGE-1 F1: 7.5) but high
semantic preservation (BERTScore: 93.46), indicating that while surface-level matching is limited, the
model successfully captures underlying claim semantics.</p>
      <p>
        Notably, our English results (ROUGE-1 F1: 46.98, METEOR: 41.16) substantially outperform the
CACN baseline [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] on their CLAN dataset (ROUGE-1 F1: 38.64, METEOR: 35.10), demonstrating the
effectiveness of our fine-tuning approach with structured reasoning.
      </p>
      <p>As shown in Table 4, our ablation study on English data reveals the substantial impact of each
methodological component. The baseline configuration without Chain-of-Thought reasoning or
few-shot retrieval achieves moderate performance (ROUGE-1 F1: 36.03, METEOR: 29.13). Introducing the
5W1H reasoning framework yields significant improvements across all metrics (ROUGE-1 F1: +4.23,
METEOR: +4.98), demonstrating that structured decomposition enhances claim extraction quality. The
addition of retrieval-augmented few-shot examples further amplifies performance substantially
(ROUGE-1 F1: +6.72, METEOR: +7.05), with the combined approach achieving a 30.4% relative improvement in
ROUGE-1 F1 and 41.3% in METEOR compared to the baseline. This progression validates our hypothesis
that systematic reasoning combined with contextual examples enables more accurate and semantically
coherent claim normalization.</p>
      <p>Figure 2 illustrates the qualitative improvements achieved through our progressive enhancement
approach. The base model generates claims that closely mirror the original post structure, while the
addition of 5W1H reasoning produces more focused and coherent claims. The combination of structured
reasoning with retrieval-augmented examples yields the most concise and professionally formatted
normalized claims, demonstrating how each component contributes to improved claim quality.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Future Work</title>
      <p>We developed a comprehensive approach for multilingual claim normalization using fine-tuned
Qwen3-14B enhanced with structured 5W1H reasoning, retrieval-augmented few-shot prompting and semantic
filtering techniques. Our results across 14 languages demonstrate that systematic decomposition
of social media posts enables effective cross-lingual transfer despite training exclusively on English
data, achieving competitive performance with third rank on the English leaderboard and fourth rank
on the Dutch and Punjabi leaderboards of CheckThat! 2025 Task 2. We observed that combining
structured reasoning frameworks with retrieval-based contextual examples captures the majority of
performance gains while maintaining computational efficiency. Future work includes language-specific
fine-tuning to accommodate additional low-resource languages, testing generalizability across different
social media platforms and investigating integration with complete fact-checking pipelines for
end-to-end misinformation detection systems.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used Claude (Anthropic) and ChatGPT in order
to: perform grammar and spelling checks, improve writing style and paraphrase and reword sections
for clarity and conciseness. After using these tool(s)/service(s), the author(s) thoroughly reviewed,
critically evaluated and edited all content to ensure accuracy and alignment with research objectives.
The author(s) take(s) full responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-7">
      <title>A. 5W1H Prompt</title>
      <p>Below are the prompt templates used for our 5W1H reasoning framework during model training and
inference.</p>
      <sec id="sec-7-1">
        <title>A.1. System Prompt</title>
        <p>Listing 1: System prompt for 5W1H claim normalization
You are an AI assistant that analyzes social media posts to extract factual claims. For each post,
you will analyze it using the WH questions framework and extract the main factual claim. Make
sure to reflect same language the post is mentioned in. If the post is in Hindi, respond in
Hindi. Your output must be valid JSON with the following structure:
{
"what": "Subject or topic of the post",
"who": "Key individuals, organizations, or groups mentioned",
"where": "Location information (if mentioned)",
"when": "Time information (if mentioned)",
"how": "Process information (if described)",
"why": "Reason or motivation information (if explained)",
"claim": "The single main factual crisp claim made in the post within 10-15 words"
}
If information for a particular field is not available, use an empty string. Also if information
is not clearly written, don’t assume anything from your end. Always stick to the post, don’t
add anything from your end. Keep things concise.</p>
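        <p>At inference time the model's reply has to be parsed against this schema. A hedged sketch of such a validator (the helper and its empty-string fallback are our illustration, not part of the shared-task system description):</p>
        <p>
```python
# Parse the model's JSON reply from Listing 1; any missing or unparsable
# field falls back to an empty string, mirroring the schema's convention.
import json

FIELDS = ("what", "who", "where", "when", "how", "why", "claim")

def parse_model_output(raw: str) -> dict:
    """Return a dict with all seven 5W1H fields, defaulting to empty strings."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {field: "" for field in FIELDS}
    return {field: str(data.get(field, "")) for field in FIELDS}

reply = '{"what": "plane crash", "claim": "Image shows plane before crash in Karachi"}'
parsed = parse_model_output(reply)
print(parsed["claim"])
print(parsed["where"] == "")  # unmentioned fields come back empty; prints True
```
        </p>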
      </sec>
      <sec id="sec-7-2">
        <title>A.2. User Prompt Template</title>
        <p>Listing 2: User prompt template for structured claim analysis
Carefully analyze the following social media post and answer each question thoughtfully to
identify the main factual claim:
Post: {post}
Please answer each of these questions, based only on what is stated in the post:
1. What is the subject/topic of the post?
2. Who is the post talking about (key individuals, organizations, or groups)?
3. Where is this situation taking place (if mentioned)?
4. When did this situation take place (if mentioned)?
5. How did the situation take place (if described)?
6. Why did the situation take place (if explained)?
After answering these questions, extract the main factual claim being made in the post in a single
, clear, concise sentence.</p>
        <p>Provide your response in the specified JSON format:
{
"what": "...",
"who": "...",
"where": "...",
"when": "...",
"how": "...",
"why": "...",
"claim": "..."
}</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>B. Configuration Examples</title>
      <p>Figure 3: More examples for illustrating progressive enhancement through 5W1H reasoning and retrieval</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Elsner</surname>
          </string-name>
          , G. Atkinson,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zahidi</surname>
          </string-name>
          ,
          <year>2025</year>
          . URL: https://reports.weforum.org/docs/WEF_Global_Risks_Report_2025.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Dizikes</surname>
          </string-name>
          , Study:
          <article-title>On twitter, false news travels faster than true stories</article-title>
          ,
          <year>2018</year>
          . URL: https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Calma</surname>
          </string-name>
          ,
          <article-title>Meta is leaving its users to wade through hate and disinformation, 2025</article-title>
          . URL: https://www.theverge.com/2025/1/7/24338127/meta-end-fact-checking-misinformation-zuckerberg.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>F.</given-names>
            <surname>Alam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Struß</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dietze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hafid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Korre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Muti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Nakov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ruggeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schellhammer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Setty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sundriyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Todorov</surname>
          </string-name>
          ,
          <string-name>
            <surname>V. V.</surname>
          </string-name>
          ,
          <article-title>The clef-2025 checkthat! lab: Subjectivity, fact-checking, claim normalization, and retrieval</article-title>
          , in: C.
          <string-name>
            <surname>Hauff</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Macdonald</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Jannach</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Kazai</surname>
            ,
            <given-names>F. M.</given-names>
          </string-name>
          <string-name>
            <surname>Nardini</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Pinelli</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Silvestri</surname>
          </string-name>
          , N. Tonellotto (Eds.),
          <source>Advances in Information Retrieval</source>
          , Springer Nature Switzerland, Cham,
          <year>2025</year>
          , pp.
          <fpage>467</fpage>
          -
          <lpage>478</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Alam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Struß</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dietze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hafid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Korre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Muti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Nakov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ruggeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schellhammer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Setty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sundriyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Todorov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Venktesh</surname>
          </string-name>
          ,
          <article-title>Overview of the CLEF-2025 CheckThat! Lab: Subjectivity, fact-checking, claim normalization, and retrieval</article-title>
          , in: J.
          <string-name>
            <surname>Carrillo-de Albornoz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Gonzalo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Plaza</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>García Seco de Herrera</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Mothe</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Piroi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Rosso</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Spina</surname>
          </string-name>
          , G. Faggioli, N. Ferro (Eds.),
          <source>Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Sixteenth International Conference of the CLEF Association (CLEF 2025)</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sundriyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Nakov</surname>
          </string-name>
          ,
          <article-title>Overview of the CLEF-2025 CheckThat! lab task 2 on claim normalization</article-title>
          , in: G. Faggioli,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ferro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          , D. Spina (Eds.),
          <source>Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum</source>
          , CLEF
          <year>2025</year>
          , Madrid, Spain,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ghazvininejad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Levy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Stoyanov</surname>
          </string-name>
          , L. Zettlemoyer,
          <article-title>BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension</article-title>
          , CoRR abs/1910.13461 (
          <year>2019</year>
          ). URL: http://arxiv.org/abs/1910.13461. arXiv:1910.13461.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>Raffel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Shazeer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Narang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Matena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Exploring the limits of transfer learning with a unified text-to-text transformer</article-title>
          , CoRR abs/1910.10683 (
          <year>2019</year>
          ). URL: http://arxiv.org/abs/1910.10683. arXiv:1910.10683.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Rush</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chopra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Weston</surname>
          </string-name>
          ,
          <article-title>A neural attention model for abstractive sentence summarization</article-title>
          , in: L.
          <string-name>
            <surname>Màrquez</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Callison-Burch</surname>
          </string-name>
          , J. Su (Eds.),
          <source>Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing</source>
          , Association for Computational Linguistics, Lisbon, Portugal,
          <year>2015</year>
          , pp.
          <fpage>379</fpage>
          -
          <lpage>389</lpage>
          . URL: https://aclanthology.org/D15-1044/. doi:10.18653/v1/D15-1044.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kikuchi</surname>
          </string-name>
          , G. Neubig,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sasano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Takamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Okumura</surname>
          </string-name>
          ,
          <article-title>Controlling output length in neural encoder-decoders</article-title>
          , in: J.
          <string-name>
            <surname>Su</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Duh</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          Carreras (Eds.),
          <source>Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing</source>
          , Association for Computational Linguistics, Austin, Texas,
          <year>2016</year>
          , pp.
          <fpage>1328</fpage>
          -
          <lpage>1338</lpage>
          . URL: https://aclanthology.org/D16-1140/. doi:10.18653/v1/D16-1140.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Grangier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Auli</surname>
          </string-name>
          ,
          <article-title>Controllable abstractive summarization</article-title>
          , in: A.
          <string-name>
            <surname>Birch</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Finch</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Luong</surname>
          </string-name>
          , G. Neubig, Y. Oda (Eds.),
          <source>Proceedings of the 2nd Workshop on Neural Machine Translation and Generation</source>
          , Association for Computational Linguistics, Melbourne, Australia,
          <year>2018</year>
          , pp.
          <fpage>45</fpage>
          -
          <lpage>54</lpage>
          . URL: https://aclanthology.org/W18-2706/. doi:10.18653/v1/W18-2706.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>W.</given-names>
            <surname>Kryscinski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>McCann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Socher</surname>
          </string-name>
          ,
          <article-title>Evaluating the factual consistency of abstractive text summarization</article-title>
          , in: B.
          <string-name>
            <surname>Webber</surname>
            , T. Cohn,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>He</surname>
          </string-name>
          , Y. Liu (Eds.),
          <source>Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)</source>
          , Association for Computational Linguistics, Online,
          <year>2020</year>
          , pp.
          <fpage>9332</fpage>
          -
          <lpage>9346</lpage>
          . URL: https://aclanthology.org/2020.emnlp-main.750/. doi:10.18653/v1/2020.emnlp-main.750.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Utama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bambrick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Moosavi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Gurevych</surname>
          </string-name>
          ,
          <article-title>Falsesum: Generating document-level NLI examples for recognizing factual inconsistency in summarization</article-title>
          , in: M.
          <string-name>
            <surname>Carpuat</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.-C. de Marneffe</surname>
            ,
            <given-names>I. V.</given-names>
          </string-name>
          <string-name>
            <surname>Meza Ruiz</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Association for Computational Linguistics, Seattle, United States,
          <year>2022</year>
          , pp.
          <fpage>2763</fpage>
          -
          <lpage>2776</lpage>
          . URL: https://aclanthology.org/2022.naacl-main.199/. doi:10.18653/v1/2022.naacl-main.199.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E.</given-names>
            <surname>Durmus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Diab</surname>
          </string-name>
          ,
          <article-title>FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization</article-title>
          , in: D.
          <string-name>
            <surname>Jurafsky</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Chai</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Schluter</surname>
          </string-name>
          , J. Tetreault (Eds.),
          <source>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>
          , Association for Computational Linguistics
          , Online,
          <year>2020</year>
          , pp.
          <fpage>5055</fpage>
          -
          <lpage>5070</lpage>
          . URL: https://aclanthology.org/2020.acl-main.454/. doi:10.18653/v1/2020.acl-main.454.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sundriyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Nakov</surname>
          </string-name>
          ,
          <article-title>From chaos to clarity: Claim normalization to empower fact-checking</article-title>
          , in: H.
          <string-name>
            <surname>Bouamor</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Pino</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          Bali (Eds.),
          <source>Findings of the Association for Computational Linguistics: EMNLP 2023</source>
          , Association for Computational Linguistics, Singapore,
          <year>2023</year>
          , pp.
          <fpage>6594</fpage>
          -
          <lpage>6609</lpage>
          . URL: https://aclanthology.org/2023.findings-emnlp.439/. doi:10.18653/v1/2023.findings-emnlp.439.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>R. Gangi</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Chinthakindi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. R.</given-names>
            <surname>Fung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Small</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <article-title>A zero-shot claim detection framework using question answering</article-title>
          , in: N.
          <string-name>
            <surname>Calzolari</surname>
            ,
            <given-names>C.-R.</given-names>
          </string-name>
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Pustejovsky</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Wanner</surname>
          </string-name>
          , K.-S. Choi,
          <string-name>
            <surname>P.-M. Ryu</surname>
          </string-name>
          ,
          <string-name>
            <surname>H.-H. Chen</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Donatelli</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Ji</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Kurohashi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Paggio</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <string-name>
            <surname>Xue</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          <string-name>
            <surname>Hahm</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          <string-name>
            <surname>He</surname>
            ,
            <given-names>T. K.</given-names>
          </string-name>
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Santus</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <string-name>
            <surname>Bond</surname>
          </string-name>
          , S.-H. Na (Eds.),
          <source>Proceedings of the 29th International Conference on Computational Linguistics</source>
          ,
          <source>International Committee on Computational Linguistics</source>
          , Gyeongju, Republic of Korea,
          <year>2022</year>
          , pp.
          <fpage>6927</fpage>
          -
          <lpage>6933</lpage>
          . URL: https://aclanthology.org/2022.coling-1.603/.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Men</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gao</surname>
          </string-name>
          , S. Liu,
          <string-name>
            <given-names>S.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <source>Qwen3 technical report</source>
          ,
          <year>2025</year>
          . URL: https://arxiv.org/abs/2505.09388. arXiv:2505.09388.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wallis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Allen-Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>LoRA: Low-rank adaptation of large language models</article-title>
          ,
          <source>CoRR abs/2106.09685</source>
          (
          <year>2021</year>
          ). URL: https://arxiv.org/abs/2106.09685. arXiv:2106.09685.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>5W1H extraction with large language models</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2405.16150. arXiv:2405.16150.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kurohashi</surname>
          </string-name>
          ,
          <article-title>GPT-RE: In-context learning for relation extraction using large language models</article-title>
          ,
          <year>2023</year>
          . URL: https://arxiv.org/abs/2305.02105. arXiv:2305.02105.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>J.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Douze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jégou</surname>
          </string-name>
          ,
          <article-title>Billion-scale similarity search with GPUs</article-title>
          ,
          <source>CoRR abs/1702.08734</source>
          (
          <year>2017</year>
          ). URL: http://arxiv.org/abs/1702.08734. arXiv:1702.08734.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>K.</given-names>
            <surname>Papineni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Roukos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ward</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-J.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>BLEU: a method for automatic evaluation of machine translation</article-title>
          , in:
          <string-name>
            <given-names>P.</given-names>
            <surname>Isabelle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Charniak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lin</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics</source>
          , Association for Computational Linguistics, Philadelphia, Pennsylvania, USA,
          <year>2002</year>
          , pp.
          <fpage>311</fpage>
          -
          <lpage>318</lpage>
          . URL: https://aclanthology.org/P02-1040/. doi:10.3115/1073083.1073135.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>ROUGE: A package for automatic evaluation of summaries</article-title>
          , in:
          <source>Text Summarization Branches Out</source>
          , Association for Computational Linguistics, Barcelona, Spain,
          <year>2004</year>
          , pp.
          <fpage>74</fpage>
          -
          <lpage>81</lpage>
          . URL: https://aclanthology.org/W04-1013/.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>S.</given-names>
            <surname>Banerjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lavie</surname>
          </string-name>
          ,
          <article-title>METEOR: An automatic metric for MT evaluation with improved correlation with human judgments</article-title>
          , in:
          <string-name>
            <given-names>J.</given-names>
            <surname>Goldstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lavie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Voss</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization</source>
          , Association for Computational Linguistics, Ann Arbor, Michigan,
          <year>2005</year>
          , pp.
          <fpage>65</fpage>
          -
          <lpage>72</lpage>
          . URL: https://aclanthology.org/W05-0909/.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kishore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Q.</given-names>
            <surname>Weinberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Artzi</surname>
          </string-name>
          ,
          <article-title>BERTScore: Evaluating text generation with BERT</article-title>
          ,
          <year>2020</year>
          . URL: https://arxiv.org/abs/1904.09675. arXiv:1904.09675.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>