<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Forum for Information Retrieval Evaluation, December</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Overview of the First Shared Task on Prompt Recovery for Misinformation Detection (PROMID 2025)</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gautam Kishore Shahi</string-name>
          <email>gautam.shahi@uni-due.de</email>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Asha Hegde</string-name>
          <email>hegdekasha@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shrey Satapara</string-name>
          <email>shreysatapara@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Parth Mehta</string-name>
          <email>parth.mehta126@gmail.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sandip Modha</string-name>
          <email>sjmodha@gmail.com</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Debasis Ganguly</string-name>
          <email>debasis.ganguly@glasgow.ac.uk</email>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Durgesh Nandini</string-name>
          <email>durgesh.nandini@uni-bayreuth.de</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>H L Shashirekha</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amit Kumar Jaiswal</string-name>
          <email>amit.chr@iitbhu.ac.in</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gabriella Pasi</string-name>
          <email>gabriella.pasi@unimib.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Thomas Mandl</string-name>
          <email>mandl@uni-hildesheim.de</email>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Fujitsu Research</institution>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Indian Institute of Technology (BHU) Varanasi</institution>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Mangalore University</institution>
          ,
          <country country="IN">India</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Bayreuth</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>University of Duisburg-Essen</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>University of Glasgow</institution>
          ,
          <country country="UK">United Kingdom</country>
        </aff>
        <aff id="aff6">
          <label>6</label>
          <institution>University of Hildesheim</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>1</volume>
      <fpage>7</fpage>
      <lpage>20</lpage>
      <abstract>
        <p>With the increasing use of Large Language Models (LLMs) for content creation and information dissemination, the problem of understanding and alleviating misinformation and hallucinations in these systems has become an important research topic. However, the existing evaluation mechanisms do not account for the role of prompts, the effects of cross-lingual generation, and real-world events in the creation of misinformation. This was the primary motivation behind this shared task on Prompt Recovery for Misinformation Detection (PROMID), organised as a part of the 17th Forum for Information Retrieval Evaluation (FIRE) in 2025 [1]. PROMID 2025 focused on three relatively unexplored problems: (i) prompt recovery, aiming at recovering the possible input prompt used for generating misinformation, (ii) identification of factual incorrectness in machine-generated cross-lingual summaries, and (iii) classification of misinformation in Twitter messages related to the February 2022 Russo-Ukrainian conflict. The shared task is divided into three subtasks, and we received a total of 16 submissions, with 11 teams finally submitting working notes. Out of these, task 1 received three submissions, with none of the teams submitting working notes, as all submissions were invalid. Task 2 received four submissions, with all teams submitting working notes. Task 3 received 12 submissions, with 9 teams submitting working notes. In this paper, we discuss the motivation behind the three tasks, their problem definitions, datasets and the participants' approaches.</p>
      </abstract>
      <kwd-group>
        <kwd>Prompt recovery</kwd>
        <kwd>LLMs</kwd>
        <kwd>Misinformation detection</kwd>
        <kwd>Textual summaries</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In the past few years, the use of LLMs for information dissemination has increased exponentially.
It is now very common for various media outlets to publish at least an LLM-generated summary, and
in many cases entire articles are AI-generated. Likewise, the use of LLMs in writing social media posts,
blog posts, etc., has become commonplace. With this rise in the use of LLMs also come the challenges
related to unintentionally or intentionally generated false information being consumed on a large scale.
However, when it comes to deriving systemic insights regarding the phenomenon of misinformation
and hallucinations, we are only scratching the surface. Most of the current work on combating
these issues focuses on the task of detecting misinformation, akin to traditional fact-checking
tasks. However, there is a lack of study around the origins of such misinformation: for example, how
do specific prompts result in different types of misinformation, and how well designed are the internal
safeguards that are supposed to prevent an LLM from generating misinformation? Further
study is warranted into the effects of the generation of cross-lingual misinformation, where the
source article (assuming there is one) and the resulting misleading article are in different languages.
In abstractive summarization, human evaluations have revealed substantial rates of unfaithful or
fabricated information, even for strong neural systems, with estimates of 20–30% of summaries
containing factual errors [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ]. Recent surveys further argue that hallucination is a structural
property of LLMs rather than an isolated bug, and highlight the need for systematic benchmarks and
detection methods, especially in high-stakes domains and downstream applications [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ].
      </p>
      <p>
        These concerns are amplified in multilingual and cross-lingual settings. For Indian languages
in particular, the last few years have seen significant progress in building summarization datasets
and models across mono, multi, and cross-lingual setups, including large-scale resources such as
PMIndiaSum and related corpora that span multiple Indian language families [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Shared tasks like
ILSUM [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and HASOC[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] have played a key role in this ecosystem by standardizing evaluation and
fostering community efforts specifically around Indo-European and Dravidian languages. The ILSUM
2023 edition, for instance, provided large-scale article–summary pairs across Hindi, Gujarati, Bengali
and Indian English, and included a subtask on detecting factual errors in LLM-generated summaries
[
        <xref ref-type="bibr" rid="ref10 ref8">8, 10</xref>
        ]. However, most existing benchmarks (for both global and Indic settings) primarily focus on
summary quality (e.g., fluency, adequacy, ROUGE) or treat factuality as a single binary label, without
offering a fine-grained view of how hallucinations manifest. At the same time, there is increasing
evidence that hallucinations and factual inconsistencies behave differently in multilingual and
cross-lingual summarisation than in purely monolingual English settings. Models must simultaneously
perform translation, content selection, and compression, which can introduce subtle errors such as
incorrect entity mappings, wrong numerical quantities, or cross-lingual semantic drift. Recent work
on multilingual and cross-lingual summarisation, including for Indian languages, has highlighted
these challenges and pointed to the need for specialised evaluation and mitigation strategies [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ].
More effort needs to be put into building benchmarks that explicitly target the factual correctness of
machine-generated cross-lingual summaries for Indic languages, especially in realistic news scenarios
where such summaries are consumed by large populations and potentially contribute to the spread of
misinformation. The PROMID 2025 task is designed to address this gap.
      </p>
      <p>
        The task was organized as a part of the 17th Forum for Information Retrieval Evaluation (FIRE
2025) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Traditionally, FIRE has focused on shared tasks in a general cross-lingual, low-resource
setting [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], geared towards but not limited to South Asian languages. Some of the past
shared tasks include hate speech detection [
        <xref ref-type="bibr" rid="ref11 ref12 ref13 ref14 ref15">11, 12, 13, 14, 15</xref>
        ], sentiment analysis [
        <xref ref-type="bibr" rid="ref16 ref17 ref18">16, 17, 18</xref>
        ], fake news
detection [
        <xref ref-type="bibr" rid="ref19 ref20 ref21">19, 20, 21</xref>
        ], machine translation [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], mixed script IR [
        <xref ref-type="bibr" rid="ref23 ref24">23, 24</xref>
        ], Indian legal document retrieval
and summarization [
        <xref ref-type="bibr" rid="ref25 ref26 ref27 ref28 ref29 ref30">25, 26, 27, 28, 29, 30</xref>
        ], authorship attribution [
        <xref ref-type="bibr" rid="ref31 ref32">31, 32</xref>
        ], IR from microblogs [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ], IR
for software engineering [
        <xref ref-type="bibr" rid="ref34 ref35">34, 35</xref>
        ] among others. With the current shared task, we aim to continue that
legacy of FIRE and contribute to the broader areas of hallucination and misinformation detection.
We also aim for more inclusiveness, introducing several Indo-European and Dravidian languages in
research areas that are often English-centric [
        <xref ref-type="bibr" rid="ref2 ref4">2, 4</xref>
        ].
      </p>
      <p>
        We offer three independent tasks related to these problems. Task 1 focuses specifically on the role
of prompts used to purposefully generate misinformation. It aims to explore the extent to which it
is possible to predict the intention behind generating a specific title for a given news article. While
determining the exact intent is a multifaceted study, we specifically focus on externalising this intent in
the form of the prompt that was used to generate the misleading title. To this end, the first task
focuses on predicting the prompt that was used to generate a given misleading title from an article. This
is a heavily under-researched problem, which was introduced in a Kaggle competition by Google [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ].
There have been some attempts at prompt recovery [
        <xref ref-type="bibr" rid="ref37 ref38">37, 38</xref>
        ], but a more systematic study and public
benchmark datasets are needed to address this problem. Task 1 of PROMID attempts to bridge that gap.
      </p>
      <p>
        Task 2 in this track focuses on detecting factual incorrectness in machine-generated cross-lingual
summaries. Given a source article in English and a corresponding LLM-generated summary in an
Indian language, systems must determine whether the summary is factually correct and, when it is not,
assign one or more fine-grained error labels. We consider four broad types of factual incorrectness:
misrepresentation, inaccurate quantities or measurements, false attribution, and fabrication, chosen to
align with taxonomies of hallucination proposed in recent LLM surveys while remaining interpretable
for downstream users and annotators [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. We expand the
existing ILSUM datasets to include Dravidian languages such as Kannada, Tamil, Telugu and Malayalam.
      </p>
      <p>Task 3 focuses on the identification of misinformation in a real-life setting. This task aims to develop a
model capable of classifying tweets related to the Russo-Ukrainian conflict as either misinformation
(positive class) or non-misinformation (negative class). This is closer to a traditional fact-checking task,
in an automated setting.</p>
      <p>In the remainder of the paper, we first describe the dataset creation process and dataset statistics. We
then outline the official evaluation setup, followed by a summary of participating systems and their
performance. Finally, we conclude by highlighting open challenges and directions for future work on
factuality, misinformation, and prompt recovery in Indian-language LLM applications.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Prompt recovery is a relatively unexplored research area that has been slowly gaining traction in the
past couple of years. However, the approaches remain limited. The problem was first introduced in a
Kaggle competition by Google [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ]. The problem, however, was not geared towards misinformation,
but rather towards generating stylistic variations of texts (e.g., rewrite this in a Shakespearean style).
There have been some other attempts at prompt recovery [
        <xref ref-type="bibr" rid="ref37 ref38">37, 38</xref>
        ], but a more systematic study and public
benchmark datasets are needed to address this problem. Task 1 of PROMID attempts to bridge that gap.
      </p>
      <p>
        Compared to that, the misinformation and hallucination detection problem has seen an ever-increasing
amount of interest. The problem is also closely related to other tasks such as fact-checking, detecting
AI-generated content, etc. To address this problem, a variety of models and benchmark platforms
have been proposed in the last few years. Singhal et al. [
        <xref ref-type="bibr" rid="ref39">39</xref>
        ] proposed a multilingual fact-checking
benchmark by filtering and binarising the X-Fact claim data for five languages (Spanish, Italian,
Portuguese, Turkish, and Tamil). They also compared the performance of five large language models
under various prompting techniques: zero-shot, English Chain-of-Thought, cross-lingual prompting, and
their respective self-consistency methods. They employed statistical analysis, two-way ANOVA, and
correlation analysis to analyze the impact of models, methods, and language factors on performance.
The work by Chikkala et al. [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ] involves manually creating a high-quality, bilingual English–Telugu
fact-checking dataset through claim curation, cleaning, and annotation with veracity labels, gold
justifications, and multiple types of QA pairs, followed by careful translation and post-editing for
Telugu. Large language models are then benchmarked under four settings: simple zero-shot prompting
and three retrieval-augmented approaches (Naive RAG; Advanced RAG, which includes query rewriting,
re-ranking, and prompt compression; and automatic scraping of up-to-date news content). Claims are
verified and justifications generated by multiple LLMs; performance is evaluated in terms of F1 scores
for veracity classification and through a suite of automatic metrics for justification and QA quality. This
setup enables a systematic comparison of prompting versus retrieval-based methods across high- and
low-resource languages.
      </p>
      <p>
        Furthermore, many shared tasks across evaluation platforms have been actively focusing on these tasks.
Numerous tasks have been offered on platforms like CLEF, TREC, SemEval and FIRE. Some of the
recent editions of these tasks include the CheckThat! Lab at CLEF [
        <xref ref-type="bibr" rid="ref41 ref42 ref43">41, 42, 43</xref>
        ], the LLM Capabilities and Fact
Checking and Knowledge Verification themes at SemEval [
        <xref ref-type="bibr" rid="ref44 ref45">44, 45</xref>
        ], the Lateral Reading Task at TREC [
        <xref ref-type="bibr" rid="ref46">46</xref>
        ]
and the ILSUM task at FIRE [
        <xref ref-type="bibr" rid="ref47 ref48">47, 48</xref>
        ]. The ILSUM track at FIRE is perhaps the most relevant task to the PROMID
task. In a way, PROMID 2025 is the spiritual successor of the ILSUM tasks, with task 2 being a direct
continuation of task 2 in ILSUM 2024.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Task Definition</title>
      <p>
        PROMID 2025 consists of three independent subtasks[
        <xref ref-type="bibr" rid="ref49">49</xref>
        ], all related broadly to the theme of prompt
recovery or misinformation detection.
      </p>
      <sec id="sec-3-1">
        <title>3.1. Task 1: Prompt Recovery from LLM-generated Misinformative Text</title>
        <p>In the Prompt Recovery task, participants are given a factual news article summary together with
a misinformation-containing title and are asked to predict the prompt that could have been used to
generate the title from the summary in an open-ended prompt generation setting. Unlike tasks that
classify misinformation types, Prompt Recovery focuses on reconstructing the instructional input (i.e.,
the prompt text) that drove the transformation from a grounded summary to a misleading title.</p>
        <sec id="sec-3-1-1">
          <title>Input and Output.</title>
          <p>Each instance contains:
• Input: a news summary s and a generated misinformation-containing title t,
• Output: a natural-language prompt p such that a generator conditioned on (s, t) could plausibly
produce t.</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>Train/Test Setup.</title>
          <p>The training data consists of (s, t, p)
triples, where p is the prompt used to produce
t from s. The test set contains only (s, t) pairs, and systems must predict the missing prompt p̂. While
each test instance has a single reference prompt, since the task is open-ended, multiple prompts may be
semantically valid. The goal is not to generate the exact prompt, but a prompt that is semantically close
to the reference prompt.</p>
          <p>Repeated Summaries in the Test Set. In the test data, the same summary may appear in multiple
instances with different generated titles. This design reflects that a single article can be reframed in
multiple misleading ways, and it implies that different prompts were used to generate different titles
from the same underlying summary. Systems therefore must condition on both s and t to recover the
prompt, rather than relying on summary-only properties.</p>
        </sec>
        <sec id="sec-3-1-3">
          <title>Task Framing.</title>
          <p>We treat prompt recovery as a conditional generation problem:
p̂ = arg max_p P(p | s, t),</p>
          <p>where the goal is to generate a prompt that matches the dataset’s prompting style and content closely
enough to be identified as the original instruction.</p>
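As a deliberately simplified illustration of this arg-max framing, the sketch below ranks a set of candidate prompts by a stand-in compatibility score. The candidate prompts, names, and scoring function are hypothetical (not the organisers' baseline); a real system would score candidates with an LLM's conditional log-likelihood rather than token overlap.

```python
# Illustrative sketch: prompt recovery as arg max over candidate prompts,
# scored by a toy compatibility function (a stand-in for P(p | s, t)).
def score(prompt: str, summary: str, title: str) -> float:
    """Toy proxy for the conditional probability of a prompt given (summary,
    title): the fraction of prompt tokens that also occur in the context."""
    context = set(summary.lower().split()) | set(title.lower().split())
    tokens = prompt.lower().split()
    if not tokens:
        return 0.0
    return sum(tok in context for tok in tokens) / len(tokens)

def recover_prompt(candidates, summary, title):
    # p_hat = arg max_p score(p | s, t)
    return max(candidates, key=lambda p: score(p, summary, title))

# Hypothetical example instance (not from the shared-task data).
candidates = [
    "Summarize the article neutrally",
    "Rewrite the title to exaggerate the flood damage numbers",
]
summary = "Officials confirmed minor flood damage to 12 homes."
title = "Floods destroy thousands of homes, officials admit"
print(recover_prompt(candidates, summary, title))
```

With the overlap proxy, the second candidate wins because it mentions the flood-damage content shared by the summary and title; swapping in an LLM likelihood keeps the same arg-max structure.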
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Task 2: Misinformation Detection in LLM-generated Summaries</title>
        <p>
          Task 2 is the continuation of the task previously offered in ILSUM 2024 and 2023 [
          <xref ref-type="bibr" rid="ref10 ref50 ref51 ref8">8, 50, 51, 10</xref>
          ]. The task
aims to identify incorrectness in machine-generated summaries, which is an important step in ensuring
the reliability and accuracy of information. This year, the task included four Dravidian languages:
Kannada, Tamil, Telugu and Malayalam.
        </p>
        <p>We focus on four types of inaccuracies for this task, the same as in previous editions:
• Misrepresentation: This involves presenting information in a way that is misleading or that
gives a false impression. This could be done by exaggerating certain aspects, understating others,
or twisting facts to fit a particular narrative.
• Inaccurate Quantities or Measurements: Factual incorrectness can occur when precise
quantities, measurements, or statistics are misrepresented, whether through obfuscation (25 -&gt; dozens)
or through outright fudging.
• False Attribution: Incorrectly attributing a statement, idea, or action to a person or group is
another form of factual incorrectness.
• Fabrication: Making up data, sources, or events is a severe form of factual incorrectness. This
involves creating “facts” that have no basis in reality.</p>
        <p>
          For this task, in the training data, every article has a corresponding summary associated with exactly
one of the four types of incorrectness mentioned above. However, during evaluation, participants are
asked to predict all possible labels associated with text summaries in test data, as one summary can
have multiple types of incorrectness. More details about the dataset creation are available in the dataset
paper for previous tasks[
          <xref ref-type="bibr" rid="ref52">52</xref>
          ].
        </p>
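To make the single-label-train versus multi-label-test asymmetry concrete, the following sketch shows one possible record layout; the field and label names are illustrative, not the official data schema.

```python
# Hypothetical record layout for task 2 (illustrative field names only):
# training summaries carry exactly one label, while test predictions
# may carry one or more of the misinformation categories.
LABELS = {"misrepresentation", "inaccurate_quantity", "false_attribution",
          "fabrication", "correct"}

train_example = {"article": "...", "summary": "...",
                 "label": "false_attribution"}  # exactly one label in training

test_prediction = {"summary_id": 17,            # hypothetical identifier
                   "labels": {"fabrication", "misrepresentation"}}  # one or more

print(train_example["label"] in LABELS)
print(test_prediction["labels"].issubset(LABELS))
```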
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Task 3: Misinformation Detection In Social Media Texts</title>
        <p>
          The aim of this task is to develop a model capable of classifying tweets related to the Russo-Ukrainian
conflict as either misinformation (positive class) or non-misinformation (negative class). The dataset
consists of manually annotated tweets gathered through the Twitter API during the first year of the
conflict, as documented in previous work [
          <xref ref-type="bibr" rid="ref53 ref54">53, 54</xref>
          ]. Data gathering was carried out using the AMUSED
framework [
          <xref ref-type="bibr" rid="ref55">55</xref>
          ], which is designed for collecting posts from social media platforms
[
          <xref ref-type="bibr" rid="ref55 ref56">55, 56</xref>
          ]. A notable characteristic of this dataset is its substantial class imbalance, making it a useful
testbed for evaluating model robustness in scenarios where misinformation is comparatively rare. The
misinformation subset includes tweets authored in multiple languages, all of which can be translated or
processed by large language models to ensure comparability across linguistic contexts. Additionally,
misinformation-labeled tweets contain supplementary metadata such as account age and bot-likelihood
indicators; although these attributes are not included for the non-misinformation tweets by default,
participants may extract them independently if they wish to enrich their feature set. Model performance
is assessed using precision, recall, and weighted F1-score to provide a comprehensive evaluation under
imbalanced conditions.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Datasets and Evaluation</title>
      <p>In this section, we discuss the datasets used and the evaluation metric employed for each subtask.</p>
      <sec id="sec-4-1">
        <title>4.1. Datasets</title>
        <p>In the prompt recovery task, participants are provided with 9,950 training instances, each containing a
summary, a prompt, and a title containing misinformation generated using the provided prompt. For
testing, a total of 800 instances containing a summary and a title with misinformation were provided,
making it an open-ended prompt recovery task.</p>
        <p>Task 2 was offered in four Dravidian languages, namely Telugu, Tamil, Kannada, and Malayalam,
where participants are asked to predict one of five categories (four misinformation categories or
correct). Detailed train and test dataset statistics for task 2 are available in Table 1.</p>
        <p>Task 3 was offered in English, and the dataset comprises 36,174 non-misinformation tweets
and 778 misinformation tweets, highlighting a substantial skew toward the negative class. This
imbalance is evident in both splits: the training set is even more extreme, with 34,174 non-misinformation
tweets and only 364 misinformation tweets, while the test set includes 2,000 non-misinformation tweets
and 414 misinformation tweets.</p>
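The skew described above can be recomputed directly from the quoted counts; the sketch below verifies the totals and derives the per-split positive rates.

```python
# Sanity check of the task 3 split statistics quoted above.
train = {"non_misinformation": 34_174, "misinformation": 364}
test = {"non_misinformation": 2_000, "misinformation": 414}

total_neg = train["non_misinformation"] + test["non_misinformation"]
total_pos = train["misinformation"] + test["misinformation"]
print(total_neg, total_pos)  # 36174 778, matching the dataset totals

def positive_rate(split):
    """Fraction of misinformation tweets in a split."""
    n = split["misinformation"] + split["non_misinformation"]
    return split["misinformation"] / n

print(f"train positive rate: {positive_rate(train):.3%}")  # ~1.054%
print(f"test positive rate:  {positive_rate(test):.3%}")   # ~17.150%
```

Note the distribution shift: misinformation is roughly sixteen times more frequent in the test split than in training, which makes threshold tuning on the training distribution alone unreliable.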
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Evaluation</title>
        <p>
          For Task 1, we report ROUGE [
          <xref ref-type="bibr" rid="ref57">57</xref>
          ] as a standard lexical-overlap metric, and additionally use BERTScore
[
          <xref ref-type="bibr" rid="ref58">58</xref>
          ] to measure semantic similarity between the gold prompt used to generate the misinformation title
and the recovered prompt. This is important because prompt recovery often admits valid paraphrases
with low n-gram overlap, where ROUGE can underestimate performance. We therefore use ROUGE for
surface-form comparison and BERTScore to better capture meaning preservation in low-overlap but
semantically equivalent cases.
        </p>
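To illustrate why surface overlap alone can undersell a valid paraphrase, here is a minimal re-implementation of ROUGE-1 F1; the example prompts are hypothetical, official scoring may rely on standard packages, and BERTScore additionally requires a pretrained model.

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall
    between a reference and a candidate string."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold vs recovered prompt with near-identical intent.
gold = "rewrite the title to exaggerate the damage"
pred = "rewrite this title so it exaggerates damage"
print(round(rouge1_f1(gold, pred), 3))  # 0.429
```

The two prompts carry essentially the same instruction yet share only three unigrams, so ROUGE-1 F1 stays below 0.5; a semantic metric such as BERTScore would rate this pair much higher.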
        <p>For Task 2, formulated as a multi-class classification problem, we use Macro-F1 as the primary
metric, since the label distribution is imbalanced, particularly between factually correct summaries and
instances belonging to specific misinformation types. Macro-F1 ensures that performance on minority
classes is not dominated by the majority class.</p>
        <p>For Task 3, we use weighted F1 due to the very high label imbalance in the tweet misinformation
dataset, and we compute scores via automatic evaluation hosted on Codabench to ensure consistent
and reproducible leaderboard ranking.</p>
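The difference between the two averaging schemes used above can be seen on a toy imbalanced example; the sketch mirrors scikit-learn's f1_score with average="macro" versus average="weighted", though the metric itself is standard.

```python
# Minimal sketch contrasting macro- and weighted-averaged F1 on an
# imbalanced binary toy set (a degenerate always-negative classifier).
def per_class_f1(y_true, y_pred, cls):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def averaged_f1(y_true, y_pred, weighted=False):
    classes = sorted(set(y_true))
    scores = [per_class_f1(y_true, y_pred, c) for c in classes]
    if not weighted:
        return sum(scores) / len(scores)          # macro: classes count equally
    counts = [y_true.count(c) for c in classes]   # weighted: by class support
    return sum(s * n for s, n in zip(scores, counts)) / len(y_true)

y_true = [0] * 9 + [1]   # 9 negatives, 1 positive
y_pred = [0] * 10        # classifier that always predicts "negative"
print(averaged_f1(y_true, y_pred))                 # macro  ~ 0.474
print(averaged_f1(y_true, y_pred, weighted=True))  # weighted ~ 0.853
```

Macro-F1 punishes the ignored minority class harshly, while weighted F1 tracks overall support; reporting precision and recall alongside weighted F1, as done for Task 3, guards against such degenerate predictors.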
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results and Methodologies</title>
      <p>In this section, we present the results for the three subtasks, as well as a summary of the approaches
that the participants used for their best-performing runs.</p>
      <p>5.1. Task 1</p>
      <p>For task 1, all the submissions we received were invalid. Hence, there is no discussion of results or
of the approaches used by participants for this task.</p>
      <p>5.2. Task 2</p>
      <p>The results for task 2 are included in Table 2. We report the macro-averaged precision, recall, F1 and accuracy for all
four languages. In total, four teams participated in this task; however, one team only submitted runs
for Tamil and Kannada. While each team was allowed to submit up to 3 runs, we only report the best-performing
run for each team here.</p>
      <p>
        Below we give a brief overview of the systems developed by the participating teams for task 2.
      </p>
      <p>
        gokul [
        <xref ref-type="bibr" rid="ref59">59</xref>
        ] - proposed a fine-grained misinformation detection system for LLM-generated summaries in
Indian languages, targeting subtask 2 of classifying factual inconsistencies. Fine-tuning of
IndicBERTv2 with MLM-only adaptation on article-summary pairs is performed, using stratified sampling and optimization for macro-F1
across five categories of misinformation. For Tamil, Telugu, Malayalam, and Kannada, separate
language-specific models are trained with identical architectures and hyperparameters.
      </p>
      <p>[Table 2: per-language, per-team results for task 2 (participants: MUCS, gokul, wangkongqiang, and priyamsaha across Tamil, Telugu, Malayalam, and Kannada); the scores could not be recovered from the source.]</p>
      <p>wangkongqiang [
        <xref ref-type="bibr" rid="ref60">60</xref>
        ] - The authors developed multiple system variants, including a baseline Logistic
Regression (LR) classifier using TF-IDF features, a Dense Neural Network (DNN) trained on distributed
text embeddings, and a transformer-based architecture fine-tuned from microsoft-deberta-v3-base. They
conducted extensive hyperparameter tuning and ablation studies, confirming that the transformer-based
system consistently outperformed the other approaches across all languages in task 2.</p>
      <p>MUCS [
        <xref ref-type="bibr" rid="ref61">61</xref>
        ] - proposed a hybrid deep learning approach for misinformation classification using
BiLSTM, BiGRU, and Transformer+BiLSTM models with enhanced self-attention mechanisms to address
subtask 2. Their model includes personalized subword-level tokenization and strong multilingual
preprocessing to effectively preserve the morphological and syntactic differences in Indian languages.
They trained the models using class-weighted Cross-Entropy loss and Focal Loss, along with the AdamW
optimizer, powerful learning rate schedules such as OneCycleLR and CosineAnnealingLR, and
mixed-precision training for better efficiency. Their proposed RNN models outperformed others, obtaining
strong 1st positions in the Tamil and Telugu tasks with F1 scores of 0.42 and 0.50, respectively, and
strong 2nd positions in the Kannada and Malayalam tasks with F1 scores of 0.48 and 0.40, respectively.</p>
      <p>priyamsaha [
        <xref ref-type="bibr" rid="ref62">62</xref>
        ] - proposed a few-shot learning model for misinformation classification, which is
designed to categorise errors in LLM-generated Kannada news summaries. It combines retrieval-based
context selection using sentence-transformers/all-MiniLM-L6-v2, Kannada few-shot prompting, and
per-label conditional log-probability scoring to assign one of the predefined misinformation categories.
For robustness, predictions from Mistral-7B-Instruct and BLOOM-7B1 are aggregated using a
Condorcet-style ensemble.</p>
      <p>5.3. Task 3</p>
      <p>The details of the results obtained for Task 3 are shown in Table 3.</p>
      <p>Below we give a brief overview of the systems developed by the participating teams for task 3.</p>
      <p>ClimateSense [63] addresses severe class imbalance in misinformation detection by augmenting
a RoBERTa-large transformer model with external Ukraine-related misinformation data from the
fact-checking observatory. To mitigate overfitting, the authors employ weighted cross-entropy loss and
weighted random sampling during fine-tuning. This data-driven enhancement significantly improves
recall and F1-score, showcasing the efficacy of targeted dataset expansion for underrepresented classes
in transformer-based classification tasks.</p>
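      <p>As an illustration of weighted random sampling, the following standard-library sketch (generic,
not the authors' code) draws minibatches in which each sample is weighted inversely to its class
frequency, so minority-class examples appear far more often than their raw proportion.</p>

```python
import random
from collections import Counter

def sample_weights(labels):
    """Per-sample weights inversely proportional to class frequency,
    as used with weighted random sampling to rebalance minibatches."""
    freq = Counter(labels)
    return [1.0 / freq[y] for y in labels]

def weighted_batch(data, labels, k, seed=0):
    """Draw a batch of size k in which minority-class items are oversampled."""
    rng = random.Random(seed)
    pairs = list(zip(data, labels))
    return rng.choices(pairs, weights=sample_weights(labels), k=k)
```

<p>With nine majority-class samples and one minority-class sample, the total weight of each class is
equal, so each class is drawn with probability one half despite the 9:1 raw imbalance.</p>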
      <p>Sarang [64] employs a multi-stage strategy to address multilingual misinformation detection by first
translating all non-English tweets into English using the Gemma-3-12B model to ensure linguistic
homogeneity. To counteract severe class imbalance, synthetic data augmentation is performed on the minority
misinformation class via the same LLM, generating four variants per sample. A DeBERTa-v3-small
transformer is then fine-tuned on the balanced and translated dataset to capture nuanced semantic patterns,
achieving robust performance in cross-lingual settings.</p>
      <p>pratikpriyanshu [65] employs a hybrid fusion of multilingual transformer embeddings from
XLM-RoBERTa with hand-crafted linguistic features to capture both deep semantic context and
surface-level stylistic patterns indicative of misinformation. To address extreme class imbalance, the
system integrates class-weighted cross-entropy loss, decision threshold optimization, and stratified
cross-validation. Feature concatenation is followed by a sigmoid classifier, enhanced via dropout
and mixed-precision training for efficiency. This approach balances representational power with
interpretability.</p>
      <p>deepish [66] employs a fine-tuned RoBERTa-base transformer model, enhanced with a dynamic
optimal thresholding strategy to maximize the F1-score on a severely imbalanced multilingual
Twitter dataset. It incorporates a custom preprocessing pipeline that normalizes noise and tokenizes
platform-specific features, such as URLs, mentions, and hashtags, into dedicated tokens to preserve
contextual signals. Class imbalance is mitigated through weighted cross-entropy loss, while training
optimizations include gradient accumulation and a linear learning rate scheduler with early stopping.</p>
      <p>priyam_saha17 [62] proposes a memory-efficient pipeline for misinformation detection that leverages
a frozen RoBERTa encoder to extract contextual embeddings, which are then processed through a
trainable projection head and a compact classifier. The methodology employs supervised contrastive
learning to enhance representational separation between classes, using dropout to generate stochastic
views for contrastive pairs without additional forward passes.</p>
      <p>whiteby [67] introduces a hybrid deep learning framework for misinformation detection that integrates
semantic embeddings from ModernBERT with hand-crafted features engineered from X (Twitter)
metadata. The model architecture fuses transformer-based text representations with engineered
features from text, user profiles, and social engagement, processed through feed-forward networks.
To mitigate severe class imbalance, the approach employs Focal Loss with strategic resampling and
optimizes classification thresholds via grid search on validation data.</p>
      <p>sushma03 [68] presents a fine-tuned BERT-based model for detecting and classifying misinformation
in tweets about the 2022 Russo-Ukrainian conflict. It employs transfer learning on a pre-trained BERT
architecture, fine-tuning it with a task-specific dataset augmented by fact-checked articles to address
class imbalance. The model is trained with defined hyperparameters, including a learning rate of 2e-5
and a batch size of 10, and evaluated using the weighted F1-score. The method also incorporates
external multilingual datasets to enhance cross-domain generalization and improve detection accuracy
in imbalanced data scenarios.</p>
      <p>wangkongqiang [60] explores misinformation detection in LLM-generated and social media texts
using auxiliary text supervised learning. It employs logistic regression, dense neural networks, and
recurrent neural networks alongside the transformer-based DeBERTaV3 model. Enhanced through
decoupled attention and relative position encoding, DeBERTaV3 is adapted for multi-class and binary
classification across multiple languages. Results indicate that ensemble and pre-trained transformer
approaches yield competitive performance.</p>
      <p>shakshi57 [69] employs RoBERTa-based transformer embeddings for feature extraction, along with
TF-IDF vectorization for text representation, on a highly imbalanced multilingual dataset. The
system integrates an interactive web dashboard for real-time misinformation classification, providing
confidence scores and performance visualizations. Evaluation shows superior weighted F1-score
performance over traditional baselines, with additional validation through cross-domain fact-checking
articles from PolitiFact and Boom Live.</p>
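      <p>Several of the systems above replace the default 0.5 decision cutoff with a threshold tuned to
maximize F1 on held-out validation data. A minimal, generic sketch of such a grid search follows; it is
an illustration of the general technique, not any team's exact implementation.</p>

```python
def best_f1_threshold(probs, labels):
    """Grid-search the decision threshold that maximizes F1 on validation data.

    probs: predicted probabilities for the positive (misinformation) class;
    labels: gold binary labels. Returns (best_threshold, best_f1).
    Useful when class imbalance makes the default 0.5 cutoff suboptimal.
    """
    def f1_at(t):
        preds = [int(p >= t) for p in probs]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        fn = sum((not p) and y for p, y in zip(preds, labels))
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    # Only the observed probabilities can change the predictions, so they
    # form a sufficient candidate set for the grid.
    candidates = sorted(set(probs))
    return max(((t, f1_at(t)) for t in candidates), key=lambda x: x[1])
```

<p>Restricting the grid to the observed scores keeps the search exact while avoiding an arbitrary step
size; the chosen threshold is then frozen and applied unchanged to the test set.</p>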
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>PROMID 2025 represents an important step toward a more comprehensive understanding of
misinformation and hallucinations in modern NLP systems, particularly in the context of prompt-driven
generation, cross-lingual summarisation, and real-world social media discourse. By introducing novel
tasks such as prompt recovery and fine-grained factual error classification for Indian languages, the
shared task expands the scope of misinformation research beyond output-only analysis and
English-centric benchmarks. The strong participation and diversity of submitted systems demonstrate growing
community interest in addressing these challenges, while also revealing significant open problems
related to ambiguity, multilingual robustness, and factual faithfulness. We hope that the datasets,
evaluation frameworks, and insights provided through PROMID will serve as a foundation for future
work on interpretable, reliable, and socially responsible language technologies and encourage further
exploration of mitigation techniques for hallucinations and misinformation in high-impact, multilingual
settings.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The authors have employed Generative AI tools for writing parts of this paper. However, all AI-generated
content was thoroughly reviewed and edited. The authors take full responsibility for the accuracy of
the publication’s content.</p>
      <p>[60] K. Wang, P. Zhang, Q. Tan, Misinformation detection in social media texts and llm generated text
using auxiliary text supervised learning, in: K. Ghosh, T. Mandl, S. Pal, S. Majumdar, A. Chakraborty
(Eds.), Working Notes of FIRE 2025 - Forum for Information Retrieval Evaluation, Varanasi, India.
December 17–20, 2025, CEUR Workshop Proceedings, CEUR-WS.org, 2025.
[61] R. Nagaraju, H. L. Shashirekha, From misrepresentation to quantities: Labeling
misinformation types in south indian language summaries, in: K. Ghosh, T. Mandl, S. Pal, S. Majumdar,
A. Chakraborty (Eds.), Working Notes of FIRE 2025 - Forum for Information Retrieval Evaluation,
Varanasi, India. December 17–20, 2025, CEUR Workshop Proceedings, CEUR-WS.org, 2025.
[62] P. Saha, A lightweight contrastive system for misinformation detection in social media tweets, in:
K. Ghosh, T. Mandl, S. Pal, S. Majumdar, A. Chakraborty (Eds.), Working Notes of FIRE 2025 - Forum
for Information Retrieval Evaluation, Varanasi, India. December 17–20, 2025, CEUR Workshop
Proceedings, CEUR-WS.org, 2025.
[63] T. Ehrhart, R. Troncy, G. Burel, H. Alani, Misinformation detection in russo-ukrainian conflict
tweets, in: K. Ghosh, T. Mandl, S. Pal, S. Majumdar, A. Chakraborty (Eds.), Working Notes of FIRE
2025 - Forum for Information Retrieval Evaluation, Varanasi, India. December 17–20, 2025, CEUR
Workshop Proceedings, CEUR-WS.org, 2025.
[64] A. Trivedi, C. Mallikarjuna, Misinformation detection in multilingual social media texts using
llm-based translation, augmentation, and deberta fine-tuning, in: K. Ghosh, T. Mandl, S. Pal,
S. Majumdar, A. Chakraborty (Eds.), Working Notes of FIRE 2025 - Forum for Information Retrieval
Evaluation, Varanasi, India. December 17–20, 2025, CEUR Workshop Proceedings, CEUR-WS.org,
2025.
[65] P. Priyanshu, Detecting 2022 russo–ukrainian conflict misinformation using a hybrid transformer
approach, in: K. Ghosh, T. Mandl, S. Pal, S. Majumdar, A. Chakraborty (Eds.), Working Notes of
FIRE 2025 - Forum for Information Retrieval Evaluation, Varanasi, India. December 17–20, 2025,
CEUR Workshop Proceedings, CEUR-WS.org, 2025.
[66] D. Sharma, Y. Sharma, Misinformation detection using ml, in: K. Ghosh, T. Mandl, S. Pal,
S. Majumdar, A. Chakraborty (Eds.), Working Notes of FIRE 2025 - Forum for Information Retrieval
Evaluation, Varanasi, India. December 17–20, 2025, CEUR Workshop Proceedings, CEUR-WS.org,
2025.
[67] J. Peng, Z. Lin, Z. Han, A social media misinformation detection model integrating semantic and
twitter features, in: K. Ghosh, T. Mandl, S. Pal, S. Majumdar, A. Chakraborty (Eds.), Working Notes
of FIRE 2025 - Forum for Information Retrieval Evaluation, Varanasi, India. December 17–20, 2025,
CEUR Workshop Proceedings, CEUR-WS.org, 2025.
[68] S. Kumari, Automated detection of misinformation on twitter during the 2022 russo–ukrainian
conflict, in: K. Ghosh, T. Mandl, S. Pal, S. Majumdar, A. Chakraborty (Eds.), Working Notes of
FIRE 2025 - Forum for Information Retrieval Evaluation, Varanasi, India. December 17–20, 2025,
CEUR Workshop Proceedings, CEUR-WS.org, 2025.
[69] K. S. Charan, U. Suman, R. Jain, J. Kaur, S. Sharma, Aurora: Automated understanding and
recognition of omnilingual misinformation artefacts, in: K. Ghosh, T. Mandl, S. Pal, S. Majumdar,
A. Chakraborty (Eds.), Working Notes of FIRE 2025 - Forum for Information Retrieval Evaluation,
Varanasi, India. December 17–20, 2025, CEUR Workshop Proceedings, CEUR-WS.org, 2025.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gangopadhyay</surname>
          </string-name>
          <article-title>Report on the FIRE 2020 evaluation initiative</article-title>
          ,
          <source>SIGIR Forum 55</source>
          (
          <year>2021</year>
          ) 3:
          <fpage>1</fpage>
          -3:
          <lpage>11</lpage>
          . URL: https://doi.org/10.1145/3476415.3476418. doi:10.1145/3476415.3476418.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Maynez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Narayan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bohnet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>McDonald</surname>
          </string-name>
          ,
          <article-title>On faithfulness and factuality in abstractive summarization</article-title>
          ,
          <source>in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C. K.</given-names>
            <surname>Cheung</surname>
          </string-name>
          ,
          <article-title>Hallucinated but factual! inspecting the factuality of hallucinations in abstractive summarization</article-title>
          ,
          <source>in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>W.</given-names>
            <surname>Kryściński</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>McCann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Socher</surname>
          </string-name>
          ,
          <article-title>Evaluating the factual consistency of abstractive text summarization</article-title>
          ,
          <source>in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yu</surname>
          </string-name>
          , W. Ma,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Qin</surname>
          </string-name>
          , T. Liu,
          <article-title>A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions</article-title>
          ,
          <source>ACM Transactions on Information Systems</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P.</given-names>
            <surname>Sahoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Meharia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chadha</surname>
          </string-name>
          ,
          <article-title>A comprehensive survey of hallucination in large language, image, video and audio foundation models</article-title>
          ,
          <source>in: Findings of the Association for Computational Linguistics: EMNLP</source>
          <year>2024</year>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Urlana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Shrivastava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Haddow</surname>
          </string-name>
          ,
          <article-title>Pmindiasum: Multilingual and cross-lingual headline summarization for languages in india</article-title>
          ,
          <source>in: Findings of the Association for Computational Linguistics: EMNLP</source>
          <year>2023</year>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <source>Indian language summarization at FIRE</source>
          <year>2023</year>
          , in: D.
          <string-name>
            <surname>Ganguly</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Majumdar</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <string-name>
            <surname>Mitra</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Gupta</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Gangopadhyay</surname>
          </string-name>
          , P. Majumder (Eds.),
          <source>Proceedings of the 15th Annual Meeting of the Forum for Information Retrieval Evaluation</source>
          ,
          <string-name>
            <surname>FIRE</surname>
          </string-name>
          <year>2023</year>
          , Panjim, India,
          <source>December 15-18</source>
          ,
          <year>2023</year>
          , ACM,
          <year>2023</year>
          , pp.
          <fpage>27</fpage>
          -
          <lpage>29</lpage>
          . URL: https://doi.org/10.1145/3632754.3634662. doi:10.1145/3632754.3634662.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mandlia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <article-title>Overview of the HASOC track at FIRE 2019: Hate speech and offensive content identification in indo-european languages</article-title>
          ,
          <source>in: Proceedings of the 11th Forum for Information Retrieval Evaluation</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hegde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Shashirekha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <source>Indian language summarization at FIRE</source>
          <year>2024</year>
          , in: D.
          <string-name>
            <surname>Ganguly</surname>
            ,
            <given-names>D. K.</given-names>
          </string-name>
          <string-name>
            <surname>Sanyal</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Majumder</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Majumdar</surname>
          </string-name>
          , S. Gangopadhyay (Eds.),
          <source>Proceedings of the 16th Annual Meeting of the Forum for Information Retrieval Evaluation</source>
          ,
          <string-name>
            <surname>FIRE</surname>
          </string-name>
          <year>2024</year>
          , Gandhinagar, India,
          <source>December 12-15</source>
          ,
          <year>2024</year>
          , ACM,
          <year>2024</year>
          , pp.
          <fpage>22</fpage>
          -
          <lpage>25</lpage>
          . URL: https://doi.org/10.1145/3734947.3735668. doi:10.1145/3734947.3735668.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Madhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ranasinghe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zampieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nandini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jaiswal</surname>
          </string-name>
          ,
          <article-title>Overview of the HASOC subtrack at FIRE 2021: Hate speech and offensive content identification in english and indo-aryan languages</article-title>
          , in: P. Mehta,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          , M. Mitra (Eds.), Working Notes of FIRE 2021 -
          <article-title>Forum for Information Retrieval Evaluation, Gandhinagar</article-title>
          , India,
          <source>December 13-17</source>
          ,
          <year>2021</year>
          , volume
          <volume>3159</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>19</lpage>
          . URL: http://ceur-ws.org/Vol-3159/T1-1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jaiswal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nandini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <source>Overview of the HASOC track at FIRE</source>
          <year>2020</year>
          :
          <article-title>Hate speech and offensive content identification in indo-european languages</article-title>
          , in: P. Mehta,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          , M. Mitra (Eds.), Working Notes of FIRE 2020 -
          <article-title>Forum for Information Retrieval Evaluation, Hyderabad</article-title>
          , India,
          <source>December 16-20</source>
          ,
          <year>2020</year>
          , volume
          <volume>2826</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>87</fpage>
          -
          <lpage>111</lpage>
          . URL: http://ceur-ws.org/Vol-2826/T2-1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <source>Overview of the HASOC track at FIRE</source>
          <year>2019</year>
          :
          <article-title>Hate speech and offensive content identification in indo-european languages</article-title>
          , in: P. Mehta,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          , M. Mitra (Eds.), Working Notes of FIRE 2019 -
          <article-title>Forum for Information Retrieval Evaluation, Kolkata</article-title>
          , India,
          <source>December 12-15</source>
          ,
          <year>2019</year>
          , volume
          <volume>2517</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>167</fpage>
          -
          <lpage>190</lpage>
          . URL: http://ceur-ws.org/Vol-2517/T3-1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>H.</given-names>
            <surname>Madhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <article-title>Detecting offensive speech in conversational code-mixed dialogue on social media: A contextual dataset and benchmark experiments</article-title>
          ,
          <source>Expert Systems with Applications</source>
          (
          <year>2022</year>
          )
          <fpage>119342</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Mandalia</surname>
          </string-name>
          ,
          <article-title>Detecting and visualizing hate speech in social media: A cyber watchdog for surveillance</article-title>
          ,
          <source>Expert Syst. Appl</source>
          .
          <volume>161</volume>
          (
          <year>2020</year>
          )
          <fpage>113725</fpage>
          . URL: https://doi.org/10.1016/j.eswa.2020.113725. doi:10.1016/j.eswa.2020.113725.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Subramanian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ponnusamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Benhur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shanmugavadivel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ganesan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ravi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shanmugasundaram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Priyadharshini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <article-title>Offensive language detection in tamil youtube comments by adapters and cross-domain knowledge transfer</article-title>
          ,
          <source>Comput. Speech Lang</source>
          .
          <volume>76</volume>
          (
          <year>2022</year>
          )
          <fpage>101404</fpage>
          . URL: https://doi.org/10.1016/j.csl.2022.101404. doi:10.1016/j.csl.2022.101404.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Priyadharshini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Muralidaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Suryawanshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Jose</surname>
          </string-name>
          , E. Sherly,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>McCrae</surname>
          </string-name>
          ,
          <article-title>Overview of the track on sentiment analysis for dravidian languages in code-mixed text</article-title>
          , in: P. Majumder,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gangopadhyay</surname>
          </string-name>
          , P. Mehta (Eds.), FIRE 2020:
          <article-title>Forum for Information Retrieval Evaluation, Hyderabad</article-title>
          , India,
          <source>December 16-20</source>
          ,
          <year>2020</year>
          , ACM,
          <year>2020</year>
          , pp.
          <fpage>21</fpage>
          -
          <lpage>24</lpage>
          . URL: https://doi.org/10.1145/3441501.3441515. doi:10.1145/3441501.3441515.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Chakravarthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Kumaresan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sakuntharaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Madasamy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Thavareesan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Premjith</surname>
          </string-name>
          , S. K,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Navaneethakrishnan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>McCrae</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <article-title>Overview of the HASOC-DravidianCodeMix shared task on offensive language detection in tamil and malayalam</article-title>
          , in: P. Mehta,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          , M. Mitra (Eds.), Working Notes of FIRE 2021 -
          <article-title>Forum for Information Retrieval Evaluation, Gandhinagar</article-title>
          , India,
          <source>December 13-17</source>
          ,
          <year>2021</year>
          , volume
          <volume>3159</volume>
          of <source>CEUR Workshop Proceedings</source>, CEUR-WS.org
          ,
          <year>2021</year>
          , pp.
          <fpage>589</fpage>
          -
          <lpage>602</lpage>
          . URL: http://ceur-ws.org/Vol-3159/T3-1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Amjad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sidorov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zhila</surname>
          </string-name>
          ,
          <article-title>Data augmentation using machine translation for fake news detection in the urdu language</article-title>
          , in:
          <string-name>
            <given-names>N.</given-names>
            <surname>Calzolari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Béchet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Blache</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Choukri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Declerck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Goggi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Isahara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Maegaard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mariani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Mazo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Moreno</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Odijk</surname>
          </string-name>
          , S. Piperidis (Eds.),
          <source>Proceedings of The 12th Language Resources and Evaluation Conference</source>
          ,
          LREC
          <year>2020</year>
          , Marseille, France, May 11-16,
          <year>2020</year>
          , European Language Resources Association
          ,
          <year>2020</year>
          , pp.
          <fpage>2537</fpage>
          -
          <lpage>2542</lpage>
          . URL: https://aclanthology.org/2020.lrec-1.309/.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Amjad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zhila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sidorov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Labunets</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Butt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. I.</given-names>
            <surname>Amjad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Vitman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Gelbukh</surname>
          </string-name>
          ,
          <article-title>Overview of abusive and threatening language detection in urdu at FIRE 2021</article-title>
          , in: P. Mehta,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          , M. Mitra (Eds.), Working Notes of FIRE 2021 -
          <article-title>Forum for Information Retrieval Evaluation, Gandhinagar</article-title>
          , India,
          <source>December 13-17</source>
          ,
          <year>2021</year>
          , volume
          <volume>3159</volume>
          of <source>CEUR Workshop Proceedings</source>, CEUR-WS.org
          ,
          <year>2021</year>
          , pp.
          <fpage>744</fpage>
          -
          <lpage>762</lpage>
          . URL: http://ceur-ws.org/Vol-3159/T4-1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Amjad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ashraf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zhila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sidorov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zubiaga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Gelbukh</surname>
          </string-name>
          ,
          <article-title>Threatening language detection and target identification in urdu tweets</article-title>
          ,
          <source>IEEE Access</source>
          <volume>9</volume>
          (
          <year>2021</year>
          )
          <fpage>128302</fpage>
          -
          <lpage>128313</lpage>
          . URL: https://doi.org/10.1109/ACCESS.2021.3112500. doi:10.1109/ACCESS.2021.3112500.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>J.</given-names>
            <surname>Gala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Chitale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>AK</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Doddapaneni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Gumma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nawale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sujatha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Puduppully</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          , et al.,
          <article-title>IndicTrans2: Towards high-quality and accessible machine translation models for all 22 scheduled indian languages</article-title>
          ,
          <source>arXiv preprint arXiv:2305.16307</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>S.</given-names>
            <surname>Banerjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chakma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Naskar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bandyopadhyay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Choudhury</surname>
          </string-name>
          ,
          <article-title>Overview of the mixed script information retrieval (MSIR) at FIRE-2016</article-title>
          , in: P. Majumder,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sankhavara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          (Eds.),
          <source>Working notes of FIRE 2016 - Forum for Information Retrieval Evaluation</source>
          , Kolkata, India, December 7-10,
          <year>2016</year>
          , volume
          <volume>1737</volume>
          of <source>CEUR Workshop Proceedings</source>, CEUR-WS.org
          ,
          <year>2016</year>
          , pp.
          <fpage>94</fpage>
          -
          <lpage>99</lpage>
          . URL: http://ceur-ws.org/Vol-1737/T3-1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sequiera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Choudhury</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Banerjee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Naskar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bandyopadhyay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Chittaranjan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chakma</surname>
          </string-name>
          ,
          <article-title>Overview of FIRE-2015 shared task on mixed script information retrieval</article-title>
          , in: P. Majumder,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Agrawal</surname>
          </string-name>
          , P. Mehta (Eds.),
          <source>Post Proceedings of the Workshops at the 7th Forum for Information Retrieval Evaluation</source>
          , Gandhinagar, India, December 4-6
          ,
          <year>2015</year>
          , volume
          <volume>1587</volume>
          of <source>CEUR Workshop Proceedings</source>, CEUR-WS.org
          ,
          <year>2015</year>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>25</lpage>
          . URL: http://ceur-ws.org/Vol-1587/T2-1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <article-title>Overview of the FIRE 2019 AILA track: Artificial intelligence for legal assistance</article-title>
          , in: P. Mehta,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          , M. Mitra (Eds.), Working Notes of FIRE 2019 -
          <article-title>Forum for Information Retrieval Evaluation, Kolkata</article-title>
          , India,
          <source>December 12-15</source>
          ,
          <year>2019</year>
          , volume
          <volume>2517</volume>
          of <source>CEUR Workshop Proceedings</source>, CEUR-WS.org
          ,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          . URL: http://ceur-ws.org/Vol-2517/T1-1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          , P. Majumder,
          <article-title>FIRE 2020 AILA track: Artificial intelligence for legal assistance</article-title>
          , in: P. Majumder,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gangopadhyay</surname>
          </string-name>
          , P. Mehta (Eds.), FIRE 2020:
          <article-title>Forum for Information Retrieval Evaluation, Hyderabad</article-title>
          , India,
          <source>December 16-20</source>
          ,
          <year>2020</year>
          , ACM,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>3</lpage>
          . URL: https://doi.org/10.1145/3441501.3441510. doi:10.1145/3441501.3441510.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>V.</given-names>
            <surname>Parikh</surname>
          </string-name>
          , U. Bhattacharya,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bandyopadhyay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          , P. Majumder,
          AILA
          <year>2021</year>
          :
          <article-title>Shared task on artificial intelligence for legal assistance</article-title>
          , in:
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gangopadhyay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitra</surname>
          </string-name>
          , P. Majumder (Eds.), FIRE 2021:
          <article-title>Forum for Information Retrieval Evaluation, Virtual Event</article-title>
          , India,
          <source>December 13 - 17</source>
          ,
          <year>2021</year>
          , ACM,
          <year>2021</year>
          , pp.
          <fpage>12</fpage>
          -
          <lpage>15</lpage>
          . URL: https://doi.org/10.1145/3503162.3506571. doi:10.1145/3503162.3506571.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>V.</given-names>
            <surname>Parikh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Mathur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mittal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <article-title>Lawsum: A weakly supervised approach for indian legal document summarization</article-title>
          ,
          <source>CoRR abs/2110.01188</source>
          (
          <year>2021</year>
          ). URL: https://arxiv.org/abs/2110.01188. arXiv:2110.01188.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wyner</surname>
          </string-name>
          ,
          <article-title>Identification of rhetorical roles of sentences in indian legal judgments</article-title>
          ,
          in:
          <source>Legal Knowledge and Information Systems: JURIX</source>
          <year>2019</year>
          : The Thirty-second Annual Conference
          , volume
          <volume>322</volume>
          , IOS Press,
          <year>2019</year>
          , p.
          <fpage>3</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>S.</given-names>
            <surname>Parashar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mittal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <article-title>Casrank: A ranking algorithm for legal statute retrieval</article-title>
          ,
          <source>Multimedia Tools and Applications</source>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <article-title>Optimum parameter selection for K.L.D. based authorship attribution in gujarati</article-title>
          ,
          <source>in: Sixth International Joint Conference on Natural Language Processing, IJCNLP</source>
          <year>2013</year>
          , Nagoya, Japan,
          <source>October 14-18</source>
          ,
          <year>2013</year>
          , Asian Federation of Natural Language Processing / ACL,
          <year>2013</year>
          , pp.
          <fpage>1102</fpage>
          -
          <lpage>1106</lpage>
          . URL: https://aclanthology.org/I13-1155/.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <article-title>Large scale quantitative analysis of three indo-aryan languages</article-title>
          ,
          <source>J. Quant. Linguistics</source>
          <volume>23</volume>
          (
          <year>2016</year>
          )
          <fpage>109</fpage>
          -
          <lpage>132</lpage>
          . URL: https://doi.org/10.1080/09296174.2015.1071151. doi:10.1080/09296174.2015.1071151.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>M.</given-names>
            <surname>Basu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <article-title>Overview of the fire 2018 track: Information retrieval from microblogs during disasters (irmidis)</article-title>
          ,
          <source>in: Proceedings of the 10th annual meeting of the Forum for Information Retrieval Evaluation</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>S.</given-names>
            <surname>Majumdar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bandyopadhyay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chattopadhyay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. P.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. D.</given-names>
            <surname>Clough</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <article-title>Overview of the irse track at fire 2022: Information retrieval in software engineering</article-title>
          ,
          <source>in: Forum for Information Retrieval Evaluation</source>
          , ACM,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>S.</given-names>
            <surname>Majumdar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Paul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Paul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bandyopadhyay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chattopadhyay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. P.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. D.</given-names>
            <surname>Clough</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <article-title>Generative ai for software metadata: Overview of the information retrieval in software engineering track at fire 2023</article-title>
          ,
          <source>arXiv preprint arXiv:2311.03374</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>W.</given-names>
            <surname>Lifferth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mooney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chow</surname>
          </string-name>
          , Llm prompt recovery, https://kaggle.com/competitions/llm-prompt-recovery,
          <year>2024</year>
          . Kaggle.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>Dory: Deliberative prompt recovery for llm</article-title>
          ,
          <source>arXiv preprint arXiv:2405.20657</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Stylerec: A benchmark dataset for prompt recovery in writing style transformation</article-title>
          ,
          <source>in: 2024 IEEE International Conference on Big Data (BigData)</source>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>1678</fpage>
          -
          <lpage>1685</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>A.</given-names>
            <surname>Singhal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Law</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kassner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Damle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Multilingual fact-checking using llms</article-title>
          ,
          <source>in: Proceedings of the Third Workshop on NLP for Positive Impact</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>13</fpage>
          -
          <lpage>31</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Chikkala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Anikina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Skachkova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Vykopal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Agerri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>van Genabith</surname>
          </string-name>
          ,
          <article-title>Automatic fact-checking in english and telugu</article-title>
          ,
          <source>arXiv preprint arXiv:2509.26415</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>A.</given-names>
            <surname>Galassi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ruggeri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Barrón-Cedeño</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Caselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kutlu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Struß</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Antici</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hasanain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Köhler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Korre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Leistra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Muti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Siegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Turkmen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wiegand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zaghouani</surname>
          </string-name>
          ,
          <article-title>Overview of the CLEF-2023 CheckThat! lab task 2 on subjectivity in news articles</article-title>
          ,
          <source>in: Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum</source>
          , CLEF '2023, Thessaloniki, Greece,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>G.</given-names>
            <surname>Da San Martino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hasanain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. N.</given-names>
            <surname>Nandi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Azizov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Nakov</surname>
          </string-name>
          ,
          <article-title>Overview of the CLEF-2023 CheckThat! lab task 3 on political bias of news articles and news media</article-title>
          ,
          <source>in: Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum</source>
          , CLEF '2023, Thessaloniki, Greece,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>P.</given-names>
            <surname>Nakov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Alam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Da San Martino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hasanain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. N.</given-names>
            <surname>Nandi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Azizov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Panayotov</surname>
          </string-name>
          ,
          <article-title>Overview of the CLEF-2023 CheckThat! lab task 4 on factuality of reporting of news media</article-title>
          ,
          <source>in: Working Notes of CLEF 2023 - Conference and Labs of the Evaluation Forum</source>
          , CLEF '2023, Thessaloniki, Greece,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Moro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gregor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Srba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ostermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Šimko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Podroužek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mesarčík</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kopčan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Søgaard</surname>
          </string-name>
          ,
          <article-title>Semeval-2025 task 7: Multilingual and crosslingual fact-checked claim retrieval</article-title>
          ,
          <source>in: Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)</source>
          ,
          <year>2025</year>
          , pp.
          <fpage>2498</fpage>
          -
          <lpage>2511</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>R.</given-names>
            <surname>Vázquez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mickus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Zosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Vahtola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tiedemann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Segonne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sánchez-Vega</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Raganato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Libovickỳ</surname>
          </string-name>
          , et al.,
          <article-title>Semeval-2025 task 3: Mu-shroom, the multilingual shared task on hallucinations and related observable overgeneration mistakes</article-title>
          ,
          <source>in: Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [46]
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. D.</given-names>
            <surname>Smucker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Clarke</surname>
          </string-name>
          ,
          <article-title>Overview of the trec 2024 lateral reading track</article-title>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          [47]
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <article-title>FIRE 2022 ILSUM track: Indian language summarization</article-title>
          , in:
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gangopadhyay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          (Eds.),
          <source>Proceedings of the 14th Annual Meeting of the Forum for Information Retrieval Evaluation</source>
          ,
          FIRE 2022, Kolkata, India, December 9-13, 2022, ACM,
          <year>2022</year>
          , pp.
          <fpage>8</fpage>
          -
          <lpage>11</lpage>
          . URL: https://doi.org/10.1145/3574318.3574328. doi:10.1145/3574318.3574328.
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          [48]
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <article-title>Findings of the first shared task on indian language summarization (ILSUM): approaches challenges and the path ahead</article-title>
          , in:
          <string-name>
            <given-names>K.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitra</surname>
          </string-name>
          (Eds.),
          <source>Working Notes of FIRE 2022 - Forum for Information Retrieval Evaluation, Kolkata, India, December 9-13, 2022</source>
          , volume
          <volume>3395</volume>
          of CEUR Workshop Proceedings, CEUR-WS.org
          ,
          <year>2022</year>
          , pp.
          <fpage>369</fpage>
          -
          <lpage>382</lpage>
          . URL: https://ceur-ws.org/Vol-3395/T6-1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          [49]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hegde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nandini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Shashirekha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jaiswal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <article-title>Prompt recovery for misinformation detection at fire 2025</article-title>
          ,
          <source>in: Proceedings of the 17th Annual Meeting of the Forum for Information Retrieval Evaluation</source>
          , FIRE '25, Association for Computing Machinery,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          [50]
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <article-title>Key takeaways from the second shared task on indian language summarization (ILSUM 2023)</article-title>
          , in:
          <string-name>
            <given-names>K.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitra</surname>
          </string-name>
          (Eds.),
          <source>Working Notes of FIRE 2023 - Forum for Information Retrieval Evaluation (FIRE-WN 2023), Goa, India, December 15-18, 2023</source>
          , volume
          <volume>3681</volume>
          of CEUR Workshop Proceedings, CEUR-WS.org
          ,
          <year>2023</year>
          , pp.
          <fpage>724</fpage>
          -
          <lpage>733</lpage>
          . URL: https://ceur-ws.org/Vol-3681/T8-1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          [51]
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <article-title>Overview of the third shared task on indian language summarization (ILSUM 2024)</article-title>
          , in:
          <string-name>
            <given-names>K.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Majumder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mitra</surname>
          </string-name>
          (Eds.),
          <source>Working Notes of FIRE 2024 - Forum for Information Retrieval Evaluation (FIRE 2024), Gandhinagar, India, December 12-15, 2024</source>
          , CEUR Workshop Proceedings, CEUR-WS.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          [52]
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <article-title>Fighting fire with fire: Adversarial prompting to generate a misinformation detection dataset</article-title>
          ,
          <source>CoRR abs/2401.04481</source>
          (
          <year>2024</year>
          ). URL: https://doi.org/10.48550/arXiv.2401.04481. doi:10.48550/ARXIV.2401.04481. arXiv:2401.04481.
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          [53]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mejova</surname>
          </string-name>
          ,
          <article-title>Too little, too late: Moderation of misinformation around the russo-ukrainian conflict</article-title>
          ,
          <source>in: Proceedings of the 17th ACM Web Science Conference 2025</source>
          ,
          <year>2025</year>
          , pp.
          <fpage>379</fpage>
          -
          <lpage>390</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          [54]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Seneviratne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Spaniol</surname>
          </string-name>
          ,
          <article-title>Semcafe: When named entities make the difference assessing web source reliability through entity-level analytics</article-title>
          ,
          <source>in: Proceedings of the 17th ACM Web Science Conference 2025</source>
          ,
          <year>2025</year>
          , pp.
          <fpage>148</fpage>
          -
          <lpage>157</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          [55]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Majchrzak</surname>
          </string-name>
          ,
          <article-title>Amused: an annotation framework of multimodal social media data</article-title>
          ,
          <source>in: International Conference on Intelligent Technologies and Applications</source>
          , Springer,
          <year>2021</year>
          , pp.
          <fpage>287</fpage>
          -
          <lpage>299</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          [56]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dirkson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Majchrzak</surname>
          </string-name>
          ,
          <article-title>An exploratory study of covid-19 misinformation on twitter</article-title>
          ,
          <source>Online Social Networks and Media</source>
          <volume>22</volume>
          (
          <year>2021</year>
          )
          <fpage>100104</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>
          [57]
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>Rouge: A package for automatic evaluation of summaries</article-title>
          ,
          <source>in: Text Summarization Branches Out</source>
          ,
          <year>2004</year>
          , pp.
          <fpage>74</fpage>
          -
          <lpage>81</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>
          [58]
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kishore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Q.</given-names>
            <surname>Weinberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Artzi</surname>
          </string-name>
          ,
          <article-title>BERTScore: Evaluating text generation with BERT</article-title>
          , arXiv preprint arXiv:1904.09675 (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>
          [59]
          <string-name>
            <given-names>N. V.</given-names>
            <surname>Gokul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Joel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gautham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rajeswari</surname>
          </string-name>
          ,
          <article-title>IndicBERTv2-MLM-only for fine-grained misinformation analysis in South Indian languages</article-title>
          , in:
          <string-name>
            <given-names>K.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Majumdar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          (Eds.),
          <source>Working Notes of FIRE 2025 - Forum for Information Retrieval Evaluation, Varanasi, India, December 17-20, 2025</source>
          , CEUR Workshop Proceedings, CEUR-WS.org,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>