<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Misinformation Detection in Social Media Tweets</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Priyam Saha</string-name>
          <email>impriyamsaha@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Associate in Cyber Risk Advisory, Grant Thornton Advisors LLC</institution>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>A compact classification system was developed and submitted to Prompt Recovery for Misinformation Detection (PROMID) Subtask 3 for the detection of misinformation in tweets about the Russia-Ukraine conflict on the Twitter platform, as provided by the workshop organisers. The proposed solution combines a frozen RoBERTa encoder, a small projection head trained with a supervised contrastive objective, and a lightweight classifier trained jointly with binary cross-entropy. Design choices were driven by compute and memory constraints; several practical implementation details and evaluation outcomes are reported to support reproducibility of results. Predictions computed on the test dataset provided by the organizers were submitted on the Codabench platform under team 'priyam_saha17' with submission ID 431064. On the official test set, the methodology produced a weighted F1 score of 0.82 (precision 0.87, recall 0.80), securing 5th rank on the track leaderboard, accessible at Link. For comparison, the leaderboard was topped by team 'ClimateSense', who achieved a weighted F1 score of 0.91 (precision 0.91, recall 0.91). The approach, training pipeline and error analysis are documented in order to assist future participants and applied researchers working under limited resource conditions.</p>
      </abstract>
      <kwd-group>
        <kwd>misinformation detection</kwd>
        <kwd>contrastive learning</kwd>
        <kwd>RoBERTa</kwd>
        <kwd>social media</kwd>
        <kwd>PROMID</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Misinformation on social media has been recognized as a substantial challenge for public discourse
and policy. Automated detection systems were requested in PROMID Subtask 3 to classify tweets
related to the Russia–Ukraine conflict as misinformation or non-misinformation. This work documents
a memory-efficient pipeline that was designed to operate on a single 16 GB Tesla P100 GPU by freezing
the transformer encoder and training compact head modules. The central design objective was to
maximize representational separation between labeled classes using supervised contrastive learning
while keeping the number of trainable parameters low on account of constrained compute resources.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related</title>
    </sec>
    <sec id="sec-3">
      <title>Work</title>
      <p>
        Contrastive representation learning has been widely adopted for visual and textual tasks due to its
effectiveness at structuring embedding spaces. Classical methods for self-supervised contrastive learning
were popularized by Chen et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which demonstrated the power of data augmentations and
large-batch contrastive losses. Supervised variants that exploit label information were later proposed by
Khosla et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], showing improved downstream classification performance when positive pairs are
formed from examples with the same label.
      </p>
      <p>
        Pretrained language encoders such as RoBERTa [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and BERT [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] have been extensively used for
classification tasks; their contextualized representations are commonly fine-tuned end-to-end for high
performance. Under compute constraints, however, head-only fine-tuning (freezing the encoder) is a
pragmatic alternative and has been used in applied settings to balance cost and accuracy as put forward
by Zhang et al. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Recent work has also shown that combining contrastive objectives with supervised
classification can increase robustness and separation in learned spaces [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <table-wrap id="tab1">
        <label>Table 1</label>
        <caption>
          <p>Summary of the data used in the experiments.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Count</th><th>Description</th></tr>
          </thead>
          <tbody>
            <tr><td>364</td><td>positives extracted from misinfo_train.csv</td></tr>
            <tr><td>34,174</td><td>full negative pool before downsampling</td></tr>
            <tr><td>728</td><td>positives + downsampled negatives</td></tr>
            <tr><td>582</td><td>training examples after stratified split</td></tr>
            <tr><td>146</td><td>validation examples after stratified split</td></tr>
            <tr><td>2,414</td><td>test samples for which final predictions were produced</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-3-1">
        <title>2,414 final predictions were produced for these samples</title>
        <p>
          The Prompt Recovery for Misinformation Detection (PROMID) shared task has been introduced to
systematically study misinformation detection under prompt recovery and generalization settings [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
The task statement, subtasks, datasets and evaluation metrics are described in detail by the organisers,
providing a unified benchmark for multilingual and topic-focused misinformation detection in social
media [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>
          The PROMID Subtask 3 dataset collection was informed by a link-based annotation framework,
namely AMUSED, proposed by Shahi et al. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] and the subtask dataset has been shared by the organisers
[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. Dataset and preprocessing</title>
      <p>The PROMID Subtask 3 dataset was provided by the organisers and consisted of manually annotated
tweets collected during the first year of the Russia–Ukraine war. Two labeled CSV files were supplied:
misinfo_train.csv (positive class) and nonmisinfo_train.csv (negative class). A held-out test CSV
without labels was provided for final predictions.</p>
      <p>A compact summary of the data used in the experiments is shown in Table 1. The negative class was heavily
over-represented in the original collection and was downsampled to form a balanced training set, so that
364 positive and 364 negative examples were available for mini-batch training.
Empty or very short text entries were removed. Tokenization was performed using the RoBERTa
tokenizer with truncation to a maximum length of 512 tokens.</p>
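      <p>For concreteness, a minimal sketch of this preprocessing is shown below. It assumes the tweet text is stored in a column named tweet; the actual column names in the organisers' CSV files may differ.</p>
      <preformat preformat-type="code">
# Minimal preprocessing sketch (assumption: the text column is named "tweet").
import pandas as pd
from sklearn.model_selection import train_test_split
from transformers import RobertaTokenizerFast

MAX_LEN = 512

pos = pd.read_csv("misinfo_train.csv").assign(label=1)      # misinformation
neg = pd.read_csv("nonmisinfo_train.csv").assign(label=0)   # non-misinformation

# Remove empty or very short texts, then downsample negatives to the positive count.
pos = pos[pos["tweet"].fillna("").str.strip().str.len().gt(5)]
neg = neg[neg["tweet"].fillna("").str.strip().str.len().gt(5)]
neg = neg.sample(n=len(pos), random_state=42)

data = pd.concat([pos, neg]).sample(frac=1.0, random_state=42).reset_index(drop=True)

# Stratified train/validation split (roughly 582 / 146 examples for 728 rows).
train_df, val_df = train_test_split(data, test_size=0.2,
                                    stratify=data["label"], random_state=42)

# Tokenize with the RoBERTa tokenizer, truncating to 512 tokens.
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
train_enc = tokenizer(train_df["tweet"].tolist(), max_length=MAX_LEN,
                      truncation=True, padding="max_length", return_tensors="np")
      </preformat>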
    </sec>
    <sec id="sec-5">
      <title>4. Methodology</title>
      <p>The pipeline was intentionally simple and reproducible. The core modules are:
1. Encoder. A pretrained roberta-base model was used to extract contextual token embeddings.
The encoder parameters were frozen during head-only training to reduce memory consumption
and runtime.
2. Pooling. Mean pooling across token embeddings (masked by the attention mask) was used to
obtain a single vector representation per tweet from the token-level embeddings produced by the
encoder.
3. Projection head. A small feedforward projection head (Dense → Dropout → Dense →
LayerNorm) was trained with stochastic dropout active during training, so that calling the projection
head twice produced two stochastic views of the same example.
4. Classifier head. A compact classifier (Dense(256, gelu) → Dropout → Dense(1, sigmoid)) was
trained jointly to produce final binary predictions.</p>
      <p>The training objective combined a supervised contrastive loss and a binary cross-entropy loss so
that the representation space was encouraged to bring same-label examples closer while the classifier
learned decision boundaries on the pooled vectors.</p>
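      <p>A minimal sketch of these modules in TensorFlow/Keras is shown below. Hidden sizes and dropout rates that are not specified in the text (for example the first dense layer of the projection head) are illustrative assumptions rather than the exact released configuration.</p>
      <preformat preformat-type="code">
# Sketch of the frozen-encoder architecture: encoder, masked mean pooling,
# projection head and classifier head (unspecified sizes are assumptions).
import tensorflow as tf
from transformers import TFRobertaModel

PROJ_DIM = 64

encoder = TFRobertaModel.from_pretrained("roberta-base")
encoder.trainable = False  # head-only training

def masked_mean_pool(hidden_states, attention_mask):
    # Mean over real tokens only, using the attention mask.
    mask = tf.cast(attention_mask, tf.float32)[:, :, tf.newaxis]
    summed = tf.reduce_sum(hidden_states * mask, axis=1)
    counts = tf.maximum(tf.reduce_sum(mask, axis=1), 1e-9)
    return summed / counts

# Projection head: Dense -> Dropout -> Dense -> LayerNorm
projection_head = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="gelu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(PROJ_DIM),
    tf.keras.layers.LayerNormalization(),
])

# Classifier head: Dense(256, gelu) -> Dropout -> Dense(1, sigmoid)
classifier_head = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="gelu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
      </preformat>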
      <sec id="sec-5-1">
        <title>4.1. Contrastive learning: conceptual and mathematical description</title>
        <p>Contrastive learning aims to structure the representation space such that similar (positive) pairs are
close while dissimilar (negative) pairs are separated. In supervised contrastive learning, the class labels
are used to generate positive pairs for each anchor.</p>
        <p>Given a minibatch of <italic>N</italic> examples, two stochastic views of each example were produced through
dropout in the projection head, resulting in 2<italic>N</italic> projections z<sub>1</sub>, …, z<sub>2N</sub> (each z<sub>i</sub> is ℓ2-normalized). Let y<sub>i</sub>
denote the integer label for the example corresponding to projection z<sub>i</sub>; labels are duplicated to match
the 2<italic>N</italic> projections.</p>
        <p>The supervised contrastive loss used in this work is defined per anchor <italic>i</italic> as:</p>
        <disp-formula id="eq1">
          <label>(1)</label>
          <tex-math>\ell_i = -\frac{1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(\mathbf{z}_i^{\top}\mathbf{z}_p / \tau)}{\sum_{a \neq i} \exp(\mathbf{z}_i^{\top}\mathbf{z}_a / \tau)}</tex-math>
        </disp-formula>
        <p>where</p>
        <disp-formula id="eq2">
          <label>(2)</label>
          <tex-math>P(i) = \{\, p : y_p = y_i,\; p \neq i \,\}</tex-math>
        </disp-formula>
        <p>is the set of positive indices for anchor <italic>i</italic>, and τ &gt; 0 is a temperature hyperparameter. The final
contrastive loss is averaged across anchors:</p>
        <disp-formula id="eq3">
          <label>(3)</label>
          <tex-math>\mathcal{L}_{\mathrm{contrastive}} = \frac{1}{2N} \sum_{i=1}^{2N} \ell_i</tex-math>
        </disp-formula>
        <p>
          The supervised contrastive formulation encourages clusters corresponding to the same label while
using all other examples in the batch as implicit negatives, improving utilization of batch information
compared to pairwise binary losses. For reference, early self-supervised instantiations such as SimCLR
[
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] used two augmented views of the same instance and an InfoNCE loss; the supervised extension is
discussed extensively in [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
        </p>
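        <p>A minimal TensorFlow sketch of the loss in Equations (1)–(3) is given below; the temperature value shown is an illustrative assumption rather than a reported setting.</p>
        <preformat preformat-type="code">
# Illustrative supervised contrastive loss, consistent with Eqs. (1)-(3).
import tensorflow as tf

def supervised_contrastive_loss(projections, labels, temperature=0.1):
    """projections: (2N, d) stochastic views; labels: (2N,) integer labels."""
    z = tf.math.l2_normalize(projections, axis=1)
    sim = tf.matmul(z, z, transpose_b=True) / temperature            # (2N, 2N) similarities

    n = tf.shape(z)[0]
    labels = tf.reshape(labels, [-1, 1])
    positive_mask = tf.cast(tf.equal(labels, tf.transpose(labels)), tf.float32)
    self_mask = 1.0 - tf.eye(n)                                      # exclude the anchor itself
    positive_mask = positive_mask * self_mask

    # Log-softmax over all non-anchor entries (the denominator of Eq. (1)).
    exp_sim = tf.exp(sim) * self_mask
    log_prob = sim - tf.math.log(tf.reduce_sum(exp_sim, axis=1, keepdims=True) + 1e-12)

    # Average over positives per anchor (Eq. (1)), then over anchors (Eq. (3)).
    pos_per_anchor = tf.maximum(tf.reduce_sum(positive_mask, axis=1), 1.0)
    per_anchor = -tf.reduce_sum(positive_mask * log_prob, axis=1) / pos_per_anchor
    return tf.reduce_mean(per_anchor)
        </preformat>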
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Combined loss and optimization</title>
        <p>The classifier produced a scalar probability ŷ<sub>i</sub> ∈ (0, 1). The binary cross-entropy loss was computed in
the usual way:</p>
        <disp-formula id="eq4">
          <label>(4)</label>
          <tex-math>\mathcal{L}_{\mathrm{BCE}} = -\frac{1}{N} \sum_{i=1}^{N} \left[\, y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \,\right]</tex-math>
        </disp-formula>
        <p>The final training objective was a weighted sum:</p>
        <disp-formula id="eq5">
          <label>(5)</label>
          <tex-math>\mathcal{L} = \alpha\, \mathcal{L}_{\mathrm{contrastive}} + \beta\, \mathcal{L}_{\mathrm{BCE}}</tex-math>
        </disp-formula>
        <p>with α = β = 1.0 selected after light tuning. Gradient updates were applied only to the projection
and classifier head parameters (unless an experimental unfreeze of the top encoder layers was explicitly
activated). The overall system architecture is summarised in the flowchart shown in Figure 1, which
illustrates the preprocessing steps, the frozen encoder, the contrastive projection pathway, and the
classifier used to produce final predictions.</p>
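        <p>Putting the pieces together, a sketch of one joint training step is shown below. It reuses the encoder, masked_mean_pool, projection_head, classifier_head and supervised_contrastive_loss definitions from the earlier sketches and is illustrative rather than the exact training script.</p>
        <preformat preformat-type="code">
# Sketch of one joint training step: a single frozen encoder pass, two dropout
# views from the projection head, and gradients restricted to the head parameters.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-4, global_clipnorm=1.0)
bce = tf.keras.losses.BinaryCrossentropy()
alpha, beta = 1.0, 1.0   # loss weights, as in Section 4.2

@tf.function
def train_step(input_ids, attention_mask, labels):
    # Single encoder forward pass; the encoder stays frozen.
    hidden = encoder(input_ids, attention_mask=attention_mask, training=False).last_hidden_state
    pooled = masked_mean_pool(hidden, attention_mask)

    with tf.GradientTape() as tape:
        # Two stochastic views of the same pooled vectors via active dropout.
        z1 = projection_head(pooled, training=True)
        z2 = projection_head(pooled, training=True)
        views = tf.concat([z1, z2], axis=0)
        dup_labels = tf.concat([labels, labels], axis=0)   # labels duplicated to match views

        l_con = supervised_contrastive_loss(views, dup_labels)
        probs = tf.squeeze(classifier_head(pooled, training=True), axis=-1)
        l_bce = bce(tf.cast(labels, tf.float32), probs)
        loss = alpha * l_con + beta * l_bce

    head_vars = projection_head.trainable_variables + classifier_head.trainable_variables
    grads = tape.gradient(loss, head_vars)
    optimizer.apply_gradients(zip(grads, head_vars))
    return l_con, l_bce
        </preformat>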
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Implementation details</title>
      <p>The system was implemented in TensorFlow 2.x using HuggingFace models. Key configuration choices
were:
• Model: roberta-base (encoder), projection dim = 64.
• Input: Tokenization via RoBERTa tokenizer, max length = 512, truncation enabled.
• Batching: batch size = 16, training with shuffling and prefetch for tf.data pipelines (see the sketch after this list).
• Optimizer: Adam with initial learning rate 3 × 10−4 on head parameters.
• Regularization: Dropout in heads and gradient clipping (global norm = 1.0).
• Training policy: Early stopping after 6 epochs with no validation F1 improvement; LR reduced
by factor 0.5 after 3 non-improving epochs.
• Compute: Encoder parameters were frozen to limit memory; a standard Kaggle notebook
was used for fine-tuning the model, leveraging a Tesla P100 GPU with 16 GB of memory, a
maximum of 29 GB of RAM, and up to 57.6 GB of disk space.</p>
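      <p>A minimal sketch of the tf.data input pipeline matching these settings is shown below; it reuses the tokenized arrays from the preprocessing sketch in Section 3, and the shuffle buffer size is an assumption.</p>
      <preformat preformat-type="code">
# Illustrative tf.data pipeline: shuffle, batch to 16, prefetch.
import tensorflow as tf

BATCH_SIZE = 16

train_ds = (
    tf.data.Dataset.from_tensor_slices(
        ((train_enc["input_ids"], train_enc["attention_mask"]),
         train_df["label"].values)
    )
    .shuffle(buffer_size=1024, seed=42)
    .batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)
)
      </preformat>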
      <p>During training, the projection head was invoked twice per batch with training=True (dropout active)
to generate the two stochastic views without performing two encoder forward passes. This choice
was made to minimize memory and computation while still obtaining the stochasticity required for
contrastive training.</p>
    </sec>
    <sec id="sec-7">
      <title>6. Experiments and results</title>
      <sec id="sec-7-1">
        <title>6.1. Validation dynamics</title>
        <p>Training was monitored with average losses (contrastive and classifier) and validation precision/recall/F1.
The contrastive loss decreased rapidly in early epochs as the projection head learned to structure the
embedding space; classifier loss dominated later epochs indicating head-level weight adjustments for the
classification boundary. Learning rate reductions and early stopping were used to prevent overfitting
on the small balanced training set.</p>
      </sec>
      <sec id="sec-7-2">
        <title>6.2. Final evaluation</title>
        <p>Final predictions were produced on the provided test CSV and scored by the organizers’ program. The
official classification report returned the following per-class and aggregated metrics:
• nonmisinfo: precision = 0.96, recall = 0.80, F1 = 0.87, support = 2002.
• misinfo: precision = 0.4567, recall = 0.8285, F1 = 0.59, support = 414.</p>
        <p>• weighted average F1: 0.8213.</p>
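        <p>For reference, the per-class and weighted metrics above follow the standard scikit-learn aggregation; an illustrative way to produce such a report from label vectors is shown below (the organisers' scoring program remains authoritative).</p>
        <preformat preformat-type="code">
# Illustrative computation of per-class and weighted metrics from prediction vectors.
from sklearn.metrics import classification_report

# Tiny illustrative label vectors; in practice these are the gold and predicted test labels.
y_true = [0, 0, 0, 1, 1, 0, 1, 0]
y_pred = [0, 0, 1, 1, 1, 0, 1, 1]

print(classification_report(y_true, y_pred,
                            target_names=["nonmisinfo", "misinfo"], digits=4))
        </preformat>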
        <p>The submission was made under the team name priyam_saha17 (ID 431064) on the PROMID leaderboard
for the hackathon hosted by the PROMID Subtask 3 organisers on CodaBench.</p>
        <p>The three panels in Figure 2 summarize the optimization trajectory. The learning rate remains
flat until the scheduler triggers two stages of reductions. Contrastive loss converges rapidly as the
projection space stabilises, while classifier loss declines more gradually as the boundary is refined. The
validation curves reflect the model’s high-recall behavior throughout training and improving F1 until
early stopping.</p>
        <p>Figure 2: (a) learning rate schedule, (b) training losses, (c) validation metrics.</p>
      </sec>
      <sec id="sec-7-3">
        <title>6.3. Ablation study</title>
        <p>An ablation study was carried out to assess the contribution of each loss component; the results are reported
in Table 2. All variants were trained on the same balanced training dataset. The full model jointly optimizes both
objectives with equal weights ( α = β = 1.0 ). Two reduced variants were evaluated: (i) a classifier-only
configuration where the contrastive term was removed ( α = 0 ), and (ii) a contrastive-only configuration
where the classifier loss was removed ( β = 0 ).</p>
        <p>The results indicate that joint optimization of contrastive representation learning and supervised
classification is necessary to achieve a balanced precision–recall trade-off and the best F1 scores under
constrained compute settings. Further, it is observed that weights of α = β = 1.0 yield a higher F1
score on validation data than an averaged configuration ( α = β = 0.5 ). Hence, this configuration
was adopted for prediction on the test data.</p>
        <table-wrap id="tab2">
          <label>Table 2</label>
          <caption>
            <p>Ablation study showing the impact of individual loss components. α denotes the weight of the contrastive loss
and β the weight of the classifier (binary cross-entropy) loss.</p>
          </caption>
          <table>
            <thead>
              <tr><th>Configuration</th><th>α</th><th>β</th><th>Validation F1</th><th>Validation Recall</th><th>Validation Precision</th></tr>
            </thead>
            <tbody>
              <tr><td>Classifier only</td><td>0.0</td><td>1.0</td><td>0.6667</td><td/><td/></tr>
              <tr><td>Contrastive only</td><td>1.0</td><td>0.0</td><td>0.0941</td><td/><td/></tr>
              <tr><td>Contrastive + Classifier</td><td>0.5</td><td>0.5</td><td/><td/><td/></tr>
              <tr><td>Contrastive + Classifier</td><td>1.0</td><td>1.0</td><td/><td/><td/></tr>
            </tbody>
          </table>
        </table-wrap>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>7. Error analysis and discussion</title>
      <p>The system exhibited a high recall for the misinformation class but a comparatively low precision,
indicating a tendency to over-predict the positive class. Manual inspection of false positives revealed
common patterns:
• Tweets that quoted or criticised a claim were sometimes classified as endorsing it because the
local text included keywords associated with misinformation; the model lacked explicit modeling
of quotation or negation scope.</p>
      <p>• Very short tweets, or tweets consisting primarily of a URL or an image reference, were often
misclassified due to missing contextual signals.</p>
      <p>Some practical remediation strategies are suggested for future consideration:
• Enriching tweet inputs with surrounding context (linked article title or claim summary) when
available.
• If resources permit, unfreezing the encoder, or at least the top transformer layers, for a small number
of epochs to allow the encoder to learn features corresponding to this domain (see the sketch below).</p>
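      <p>As an illustration of the second suggestion, the sketch below unfreezes only the top transformer blocks of the HuggingFace TFRobertaModel used earlier; the number of unfrozen layers and the reduced learning rate are assumptions rather than tested settings.</p>
      <preformat preformat-type="code">
# Illustrative partial unfreeze of the encoder (reuses `encoder` from the earlier sketch).
import tensorflow as tf

encoder.trainable = True
for layer in encoder.roberta.encoder.layer[:-2]:
    layer.trainable = False   # keep all but the top 2 transformer blocks frozen

# A smaller learning rate is typically used once encoder weights are being updated.
optimizer = tf.keras.optimizers.Adam(learning_rate=2e-5, global_clipnorm=1.0)
      </preformat>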
    </sec>
    <sec id="sec-9">
      <title>8. Limitations</title>
      <p>The principal limitations are the reliance on a relatively small balanced training set obtained by
downsampling and the use of a frozen encoder, which limits representation adaptation. The resulting
trade-off was computational feasibility versus maximum attainable performance. The reported system
should therefore be interpreted as a strong baseline for low-resource scenarios rather than a final
state-of-the-art submission.</p>
    </sec>
    <sec id="sec-10">
      <title>9. Conclusion</title>
      <p>A lightweight supervised contrastive plus classifier system was described and evaluated for PROMID
Subtask 3. The system produced a weighted F1 of 0.8213 on the provided test data. The design choices
prioritized memory efficiency and reproducibility: encoder freezing, stochastic projection views via dropout,
and a joint contrastive/BCE objective. The results indicate that contrastive separation helps achieve
high recall for the misinformation class under constrained resources, and that further improvements
are likely if complete or selective unfreezing is introduced.</p>
    </sec>
    <sec id="sec-11">
      <title>Acknowledgments</title>
      <p>The PROMID organizing committee is gratefully acknowledged for the dataset and the scoring
infrastructure. The AMUSED framework and the dataset references provided by the organizers were used as
background during the model pipeline design.</p>
    </sec>
    <sec id="sec-12">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author used GPT-5.1 for organization and better polishing of
the phrases and language used in the article and Grammarly for grammar and spelling check. After
using these services, the author reviewed and edited the content as needed and takes full responsibility
for the publication’s content.</p>
    </sec>
    <sec id="sec-13">
      <title>A. Reproducibility checklist</title>
      <p>• Code: training script uses TensorFlow 2.x and HuggingFace TFRobertaModel.
• Model: roberta-base, projection dim = 64, classifier head as described.
• Hyperparameters: MAX_LEN=512, BATCH_SIZE=16, LR=3e-4, gradient clipping norm=1.0.
• Loss weights: α = β = 1.0.</p>
      <p>• Training: early stopping patience = 6, LR reduce factor = 0.5 after 3 non-improving epochs.</p>
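      <p>A sketch of this training policy as a plain loop is given below; MAX_EPOCHS, run_one_training_epoch and evaluate_validation_f1 are hypothetical placeholders standing in for the actual epoch and evaluation routines, and the optimizer is the one defined in the training-step sketch.</p>
      <preformat preformat-type="code">
# Sketch of early stopping on validation F1 (patience 6) with learning-rate halving
# after every 3 non-improving epochs; helper functions are hypothetical placeholders.
MAX_EPOCHS = 50
best_f1, stale_epochs = 0.0, 0

for epoch in range(MAX_EPOCHS):
    run_one_training_epoch()              # hypothetical: one pass over train_ds
    val_f1 = evaluate_validation_f1()     # hypothetical: weighted F1 on the validation set

    if val_f1 > best_f1:
        best_f1, stale_epochs = val_f1, 0
    else:
        stale_epochs += 1
        if stale_epochs % 3 == 0:         # reduce learning rate by factor 0.5
            optimizer.learning_rate.assign(optimizer.learning_rate * 0.5)
        if stale_epochs == 6:             # early stopping patience reached
            break
      </preformat>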
    </sec>
    <sec id="sec-14">
      <title>B. Code and notebook</title>
      <p>The training notebook script, prediction CSVs, plots used in the article, and the public Kaggle notebook
carrying all fine-tuned model artifacts can be found on GitHub 1.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>T.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kornblith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Norouzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hinton</surname>
          </string-name>
          :
          <article-title>A Simple Framework for Contrastive Learning of Visual Representations</article-title>
          .
          <source>In International Conference on Machine Learning (ICML)</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Khosla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Teterwak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sarna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Isola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Maschinot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wu</surname>
          </string-name>
          :
          <article-title>Supervised Contrastive Learning</article-title>
          .
          <source>Advances in Neural Information Processing Systems (NeurIPS)</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Levy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Stoyanov</surname>
          </string-name>
          :
          <article-title>RoBERTa: A Robustly Optimized BERT Pretraining Approach</article-title>
          . arXiv:1907.11692,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          :
          <article-title>BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</article-title>
          .
          <source>In NAACL-HLT</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Gunel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Afouras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Baş</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zisserman</surname>
          </string-name>
          :
          <article-title>Supervised Contrastive Learning for Limited Labels</article-title>
          .
          <source>ICLR Workshop</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Majchrzak</surname>
          </string-name>
          :
          <article-title>AMUSED: An Annotation Framework of Multi-modal Social Media Data</article-title>
          .
          <source>Technical report / preprint</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mejova</surname>
          </string-name>
          :
          <article-title>Too Little, Too Late: Moderation of Misinformation around the Russo-Ukrainian Conflict</article-title>
          .
          <source>WebSci '25</source>
          ,
          <year>2025</year>
          . DOI: 10.1145/3717867.3717876.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Smith</surname>
          </string-name>
          :
          <article-title>Head-Only Fine-Tuning: A Practical Approach for Low-Resource Adaptation</article-title>
          .
          <source>Workshop Report</source>
          ,
          <year>2021</year>
          . (Discussion of head-only strategies.)
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hegde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nandini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Shasirekha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jaiswal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          :
          <article-title>Overview of the First Shared Task on Prompt Recovery for Misinformation Detection (PROMID 2025)</article-title>
          .
          <source>Working Notes of FIRE 2025</source>
          , CEUR Workshop Proceedings,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hegde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nandini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Shasirekha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jaiswal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          :
          <article-title>Prompt Recovery for Misinformation Detection at FIRE 2025</article-title>
          .
          <source>Proceedings of the Forum for Information Retrieval Evaluation (FIRE)</source>
          , Association for Computing Machinery,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>