<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Misinformation Detection using ML</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Deepish Sharma</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yashvardhan Sharma</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Birla Institute of Technology &amp; Science, BITS PILANI University</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>The rapid global dissemination of false information on social media platforms poses a serious threat to public debate, particularly during significant events such as the Russo-Ukrainian conflict. Typical machine learning techniques are ineffective because real-world social media data can be complex, noisy, and severely class-imbalanced. To identify false information in an imbalanced, multilingual dataset of tweets about the war, our team, Deepish with id 429337, employed a fine-tuned RoBERTa-based Transformer model. Our method uses enhancements such as a dynamic optimal-thresholding strategy that maximizes the F1 score on the validation set and balanced class weighting in the loss function to mitigate the imbalance. It also applies custom pre-processing to normalize noise and to mark platform-specific elements such as URLs, mentions, and hashtags as special tokens. On a held-out test set, our optimized classifier placed fourth with a strong weighted F1 score of 0.88. In a difficult real-world setting where the misinformation class makes up only about 1% of the data, this result demonstrates the robustness and effectiveness of our approach. Our work provides a strong basis for future research on automated, high-performance disinformation detection using complex language models.</p>
      </abstract>
      <kwd-group>
        <kwd>Machine Learning</kwd>
        <kwd>Misinformation Detection</kwd>
        <kwd>Twitter dataset</kwd>
        <kwd>RoBERTa</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In this digital era, and with the advancement of GenAI, enormous amounts of information are
generated. Determining the authenticity of this information is very difficult. Misinformation can
have significant social consequences and sometimes lead to violence. For example, during the 2016
US presidential election and the Russo-Ukrainian conflict, misleading information rapidly spread on
Twitter.</p>
      <p>
        In addition, researchers have found that false information spreads faster. False news spreads
much more quickly and widely than accurate news, according to a groundbreaking study by Soroush
Vosoughi et al. of MIT [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The study demonstrated that falsehoods are 70% more likely to be retweeted.
The unprecedented speed at which misinformation propagates renders human-led fact-checking efforts
perpetually reactive and necessitates an automated, preemptive defense.
      </p>
      <p>
        Public discourse is seriously threatened by the extensive transmission of false information via social
media and other channels. A relatively new advancement in deep learning, transformers are particularly
good at comprehending the contextual relationships seen in text [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>Researchers in this field are working to identify and classify information as either misinformation
or non-misinformation. Using artificial intelligence (AI), researchers have had success in this area.
However, traditional machine learning (ML) algorithms struggle to perform when faced with complex
real-world scenarios such as natural language processing tasks.</p>
      <p>Large language models (LLMs) have significantly improved our capacity to distinguish between accurate and false
information. They are effective for content analysis because they are particularly adept at spotting
indicators of dishonesty such as odd textual patterns, biased language, and blatant contradictions.
Aware of the obstacles faced by older approaches as well as the actual harm posed by false information,
we applied this cutting-edge technology to address those challenging problems. Our proposed method
overcomes the limitations of previous studies in this area.</p>
      <p>The contribution of our research is as follows:
• Robust Preprocessing and Novel Feature Engineering
• Advanced Training and Optimization Strategies
• F1-Score Focused Prediction and Decision Making</p>
    </sec>
    <sec id="sec-2">
      <title>2. State of the Art</title>
      <p>
        Multiple studies in this domain have examined ways to detect false information using
machine learning and deep learning approaches. For instance, Ahmed et al. achieved a precision of
92% using TF-IDF features and traditional classifiers, among which SVM performed best [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        One approach was proposed to identify fresh rumors in the midst of breaking news: Word2Vec
word embeddings were used with a Long Short-Term Memory (LSTM) recurrent neural network.
Although it still needed improvement, the model achieved an accuracy of 79.5% [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In a different study,
a hybrid approach based on an LSTM-CNN model was proposed to classify tweets as rumors
or factual information, achieving an accuracy of 82% [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. However, LSTMs
frequently suffer from overfitting, especially when working with small datasets. Another work used
various CNN architectures with Bidirectional Long Short-Term Memory (BiLSTM) to detect rumors
using a hybrid approach [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. To help the model achieve the best level of accuracy, it was constructed
using a number of pre-trained embedding layers. In the field, models designed to identify false news by
examining the connection between article headlines and content were created using a hybrid strategy
that included CNN, LSTM, and BiLSTM [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The proposed models achieved a maximum accuracy of
71.2%.
      </p>
      <p>
        However, these studies generally rely on datasets that differ significantly from the one used in
this research. This research focuses on multilingual misinformation surrounding the Russo-Ukrainian
conflict using real-time retweet data [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This data set is collected using the AMUSED framework
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This dataset's distinctiveness creates issues that earlier work did not clearly address. Our approach,
tailored for this specific dataset, demonstrates good performance in a complex and dynamic
misinformation setting, with an F1-score of 0.88. Our results illustrate the usefulness of the approach for our
intended application, even though a direct comparison with earlier results is hampered by differences
in the datasets.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1 Description of the data set</title>
        <p>The classification model was trained and validated using a special data set based on false information
observed in the real world.</p>
        <p>Context and source: The collection consists of correctly annotated tweets collected over the
first year of the conflict between Russia and Ukraine. This source offers high-stakes, contextually rich
content that tests the model's ability to categorize quickly moving narratives.</p>
        <p>Language and Multilingual Aspect: Given the global scope of the conflict, the tweets are expected
to be in several languages. Multilingual preparatory steps in our workflow are essential
because the model must be able to react to false content in different languages.</p>
        <p>Severe Class Imbalance: The most notable aspect is the severe class imbalance, which poses a
significant challenge to the model. The overall distribution is:</p>
        <p>Misinformation (label 1, minority class): 364 rows for training and 156 rows for testing.
Non-misinformation (label 0, majority class): 34,174 rows for training and 14,646 rows for testing.</p>
        <p>As covered in Section 3.3, this severe imbalance requires the application of particular mitigating
strategies.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2 Data Pre-processing and Feature Engineering</title>
        <p>To maximize the signal extracted from the noisy social media text and address the model’s
inherent limitations regarding platform-specific features, a custom preprocessing pipeline was developed
(implemented in the preprocess_text function).</p>
        <p>1. Noise Normalization: All text is cleaned, including normalization of repeated punctuation
(e.g., !!! to !) and conversion of emojis to their text descriptions (demojization) to retain
semantic value.
2. Social Media Feature Injection: Key structural elements of the tweets are converted into
dedicated special tokens before RoBERTa tokenization. This includes replacing URLs (http\S+) with
[URL], user mentions (@\w+) with [USER], and formatting hashtags (#(\w+)) with [HASHTAG]
tags. This teaches the model that the presence of these elements is a relevant contextual
feature, rather than treating them as noise.
3. Language Robustness: An internal mechanism attempts language detection. While full
translation is simulated, the core function ensures that the model can handle diverse inputs, a
necessary step given the global source of the dataset.</p>
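        <p>As a minimal sketch of the first two steps above (the exact rules in the paper's preprocess_text function may differ; the regular expressions here follow the patterns named in the text):</p>
```python
import re

def preprocess_text(text: str) -> str:
    """Illustrative sketch of the Section 3.2 pipeline (assumed details)."""
    # 1. Noise normalization: collapse repeated punctuation ("!!!" -> "!").
    text = re.sub(r"([!?.])\1+", r"\1", text)
    # 2. Social media feature injection: mark structural elements with
    #    dedicated tokens before RoBERTa tokenization.
    text = re.sub(r"http\S+", "[URL]", text)       # URLs
    text = re.sub(r"@\w+", "[USER]", text)         # user mentions
    text = re.sub(r"#(\w+)", r"[HASHTAG] \1", text)  # hashtags, word kept
    return text.strip()
```
        <p>If [URL], [USER], and [HASHTAG] are registered as special tokens, the tokenizer vocabulary would typically need to be extended (e.g., via add_special_tokens and resizing the embedding matrix); the paper does not state this detail explicitly.</p>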
      </sec>
      <sec id="sec-3-3">
        <title>3.3 RoBERTa-base Model Architecture and Fine-Tuning</title>
      </sec>
      <sec id="sec-3-4">
        <title>We employ the RoBERTa-base model instantiated with RobertaForSequenceClassification. This</title>
        <p>is an end-to-end architecture where the classification layer (a linear layer) is built directly atop the
pooled output of the final Transformer layer, making the entire model diferentiable and trainable.</p>
        <p>A. Model Initialization and Loss Function
• Base Model: We used roberta-base, a 125M parameter model, as the underlying architecture.
• Class Imbalance Mitigation: Due to the severe imbalance (Misinformation ≈ 1% of the
data), balanced class weighting was computed using sklearn.utils.class_weight and
integrated directly into the CrossEntropyLoss function. This ensures that misclassifying
the minority Misinformation class incurs a significantly higher penalty than misclassifying the
majority Non-Misinformation class.</p>
        <p>The loss function  for a binary classification problem with class weighting w  is defined as:
 = −</p>
        <p>1
∑︁ w log(ˆ)
=0
(1)</p>
      </sec>
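      <p>As an illustrative sketch (assuming NumPy; the paper uses sklearn.utils.class_weight together with PyTorch's CrossEntropyLoss), the balanced weighting and the weighted loss in Eq. (1) can be reproduced on the Section 3.1 training distribution as follows:</p>
```python
import numpy as np

def balanced_class_weights(y: np.ndarray) -> np.ndarray:
    """Reproduce sklearn's class_weight='balanced' formula:
    w_c = n_samples / (n_classes * count_c)."""
    counts = np.bincount(y)
    return len(y) / (len(counts) * counts)

def weighted_cross_entropy(logits, y, weights):
    """Class-weighted cross-entropy with the weighted mean used by
    torch.nn.CrossEntropyLoss(weight=...): divide by the sum of weights."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    w = weights[y]                                          # per-sample weight
    return -(w * np.log(probs[np.arange(len(y)), y])).sum() / w.sum()

# Training-set distribution from Section 3.1: 34,174 majority vs 364 minority rows.
y_train = np.array([0] * 34174 + [1] * 364)
w = balanced_class_weights(y_train)   # minority weight w[1] is ~94x the majority's
```
      <p>With these counts the minority class receives a weight of roughly 47 versus about 0.5 for the majority class, so each misclassified Misinformation tweet contributes far more to the loss.</p>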
      <sec id="sec-3-5">
        <title>B. Training Optimizations</title>
        <p>To handle the memory requirements of the Transformer model and ensure efficient learning, several
key optimizations were implemented:
• Gradient Accumulation: Training uses a physical batch size of B_phys = 8 but applies gradient
accumulation over k = 8 steps. This simulates an effective batch size of B_eff = B_phys × k = 64,
stabilizing gradient calculation and improving training convergence without requiring excessive
GPU memory.
• Learning Rate and Scheduler: The AdamW optimizer was initialized with a low learning rate
(1 × 10⁻⁵) and weight decay (0.01). A linear learning-rate scheduler with zero warmup steps
was used to gradually decrease the learning rate over the course of training, promoting fine-tuning
stability.
• Early Stopping: The training process monitors the validation F1-score and employs an early-stopping
patience of 2 epochs to prevent overfitting on the training data.</p>
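        <p>The identity that makes gradient accumulation work, namely that averaging k micro-batch gradients equals one gradient over the effective batch, can be checked on a toy model (a NumPy least-squares sketch under assumed data, not the actual RoBERTa training loop):</p>
```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))          # one effective batch of 64 samples
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

def grad(w, Xb, yb):
    # Mean-squared-error gradient on one (micro-)batch.
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

w = np.zeros(3)
phys, accum = 8, 8                    # B_eff = phys * accum = 64
g_accum = np.zeros_like(w)
for step in range(accum):
    Xb = X[step * phys:(step + 1) * phys]
    yb = y[step * phys:(step + 1) * phys]
    g_accum += grad(w, Xb, yb) / accum    # average the micro-batch gradients

# One optimizer step would now use g_accum, which matches the
# gradient computed over the full effective batch of 64.
full_grad = grad(w, X, y)
```
        <p>Because the loss is a mean over samples, dividing each micro-batch gradient by k recovers exactly the effective-batch gradient, which is why accumulation stabilizes training without extra GPU memory.</p>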
      </sec>
      <sec id="sec-3-6">
        <title>3.4 Evaluation and Optimal Threshold Selection</title>
      </sec>
      <sec id="sec-3-7">
        <title>A. Dynamic Thresholding (Prediction Optimization)</title>
        <p>The final classification prediction is not determined by the standard probability threshold of 0.5.
Instead, a dynamic optimal threshold τ is calculated on the validation set:
1. The model generates probability scores P(1) for all validation samples.
2. The P(1) scores and true labels are used to construct the precision-recall curve.
3. The threshold τ that maximizes the F1-score on the validation data is selected.</p>
        <p>The optimal threshold τ is determined by maximizing the F1-score:</p>
        <p>Optimal τ = arg max_τ [ 2 · Precision(τ) · Recall(τ) / (Precision(τ) + Recall(τ)) ]   (2)</p>
        <p>This ensures that the final model is calibrated specifically for the task's primary metric (F1-score) and
accounts for the high-imbalance context.</p>
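        <p>A minimal sketch of this search (assuming NumPy; the paper derives candidate thresholds from the precision-recall curve, while this sketch sweeps the observed scores directly, which yields the same maximizer):</p>
```python
import numpy as np

def optimal_threshold(p1: np.ndarray, y_true: np.ndarray) -> float:
    """Pick the threshold on P(1) that maximizes F1 on the validation set."""
    best_t, best_f1 = 0.5, -1.0
    for t in np.unique(p1):                       # candidate thresholds
        pred = (p1 > t).astype(int)
        tp = np.sum((pred == 1) & (y_true == 1))
        fp = np.sum((pred == 1) & (y_true == 0))
        fn = np.sum((pred == 0) & (y_true == 1))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t
```
        <p>In practice the same result can be obtained from sklearn's precision_recall_curve output by evaluating F1 at each returned threshold.</p>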
      </sec>
      <sec id="sec-3-8">
        <title>B. Final Prediction</title>
        <p>For the unlabeled test data, the predicted label (1 or 0) is determined by applying the optimized
threshold τ:</p>
        <p>Predicted Label = 1 (Misinformation) if P(1) &gt; τ; 0 (Non-Misinformation) if P(1) ≤ τ   (3)</p>
        <p>Texts filtered out during preprocessing are automatically assigned a label of 0 (non-misinformation).</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Result</title>
      <p>The performance metrics produced by the optimized RoBERTa classifier on the held-out test set are shown in this section.</p>
      <sec id="sec-4-1">
        <title>4.1. Performance Metrics</title>
        <p>The checkpoint with the highest F1-score on the validation data, as identified by the ideal threshold, was
used to evaluate the model on the held-out test set. The weighted F1-score is used as the main indicator
of overall system quality because of the extreme class imbalance (misinformation makes up about 1%).</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Leaderboard Comparison</title>
        <p>
          The other teams taking part in the shared task [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] had their submissions compared to ours. Our
performance is contrasted with that of the top-ranked teams on the leaderboard in the table below. ClimateSense
(id 430584), the best-performing team, achieved a weighted F1 score of 0.91.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>Achieving a high weighted F1-score validates the necessity and effectiveness of the core optimizations (Section 3.2).</p>
      <sec id="sec-5-1">
        <title>5.1. Class Weighting and F1 Balance</title>
        <p>The high F1 score indicates that the dynamic optimal threshold successfully located the optimal location
on the precision-recall curve. Rather than relying on the poor default of 0.5, this guarantees that
the model's decision boundary is adjusted for balanced performance. This is essential for practical
deployment, where reducing both false positives and false negatives is crucial.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Model Strength and Contextual Feature Learning</title>
        <p>The performance achieved is due to the inherent contextual power of the RoBERTa architecture, which
is enhanced by the special preprocessing pipeline (see Section 3.2). The model can leverage Social
Media Feature Injection (tokenizing URLs, users, and hashtags) to analyze platform-specific cues as
predictive language features. Misinformation propagators frequently use echo chambers (mentions)
and viral dispersion techniques (URLs). Because of explicit tokenization, RoBERTa was able to encode
these patterns more efficiently than with ordinary clean-text tokenization.</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Comparison with State of the Art (SOTA)</title>
        <p>This high-stakes, multilingual social media dataset is unusual, making a direct quantitative comparison
with the literature discussed in Section 2 difficult. Nevertheless, the performance (F1 = 0.88) shows
that the optimized RoBERTa classifier is very competitive. Previous research using simpler, more
balanced, and cleaner benchmark datasets (such as FakeNewsNet or LIAR) frequently reported higher
accuracies, often in the mid-90% range. Our result, obtained on a clearly difficult, noisy, and highly
imbalanced dataset, demonstrates the resilience of the improved transfer-learning method in an actual
crisis situation.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>We have classified tweets as either misinformation or non-misinformation. In this study we used a
highly imbalanced, real-world dataset containing manually annotated tweets collected via the Twitter API.
Our proposed model achieved an F1 score of 0.88. The F1-score was deliberately selected as the
primary criterion because of the inherent class disparity; it implies that the model performs well in
terms of recall and precision for both classes. Furthermore, substantial text pre-processing was required
to transform noisy, unstructured social media data into a format suitable for high-performance machine
learning, and this pre-processing is a foundation of this research. Our approach can serve as a baseline
for new research in this domain.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used DeepL and Quillbot for grammar and
spelling checking. Further, the author(s) used Gemini-Banan to generate the images for Figures 1, 2, 3,
and 4. After using these tools/services, the author(s) reviewed and edited the content as needed
and take(s) full responsibility for the publication's content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Vosoughi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Aral</surname>
          </string-name>
          ,
          <article-title>The spread of true and false news online</article-title>
          , Science
          <volume>359</volume>
          (
          <year>2018</year>
          )
          <fpage>1146</fpage>
          -
          <lpage>1151</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Raza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Abdulkadir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. A.</given-names>
            <surname>Abid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Albouq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Alwadain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. U.</given-names>
            <surname>Rehman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. H.</given-names>
            <surname>Sumiea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Farhan</surname>
          </string-name>
          ,
          <article-title>Enhancing fake news detection with transformer-based deep learning: A multidisciplinary approach</article-title>
          ,
          <source>PLoS One</source>
          <volume>20</volume>
          (
          <year>2025</year>
          )
          <article-title>e0330954</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          , I. Traore,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saad</surname>
          </string-name>
          ,
          <article-title>Detection of online fake news using n-gram analysis and machine learning techniques</article-title>
          , in: International conference on intelligent, secure, and
          <article-title>dependable systems in distributed and cloud environments</article-title>
          , Springer,
          <year>2017</year>
          , pp.
          <fpage>127</fpage>
          -
          <lpage>138</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Alkhodair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Fung</surname>
          </string-name>
          , J. Liu,
          <article-title>Detecting breaking news rumors of emerging topics in social media</article-title>
          ,
          <source>Information Processing &amp; Management</source>
          <volume>57</volume>
          (
          <year>2020</year>
          )
          <fpage>102018</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>O.</given-names>
            <surname>Ajao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bhowmik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zargari</surname>
          </string-name>
          ,
          <article-title>Fake news identification on twitter with hybrid cnn and rnn models</article-title>
          ,
          <source>in: Proceedings of the 9th international conference on social media and society</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>226</fpage>
          -
          <lpage>230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M. Z.</given-names>
            <surname>Asghar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Habib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Habib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khattak</surname>
          </string-name>
          ,
          <article-title>Exploring deep neural networks for rumor detection</article-title>
          ,
          <source>Journal of Ambient Intelligence and Humanized Computing</source>
          <volume>12</volume>
          (
          <year>2021</year>
          )
          <fpage>4315</fpage>
          -
          <lpage>4333</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Abedalla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Al-Sadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Abdullah</surname>
          </string-name>
          ,
          <article-title>A closer look at fake news detection: A deep learning perspective</article-title>
          ,
          <source>in: Proceedings of the 3rd International Conference on Advances in Artificial Intelligence</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>24</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mejova</surname>
          </string-name>
          ,
          <article-title>Too little, too late: Moderation of misinformation around the russoukrainian conflict</article-title>
          ,
          <source>Websci '25</source>
          ,
          <year>2025</year>
          . doi:10.1145/3717867.3717876.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hegde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nandini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Shasirekha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jaiswal</surname>
          </string-name>
          , G. Pasi, T. Mandl,
          <article-title>Prompt recovery for misinformation detection at fire 2025, in: Proceedings of the 17th Annual Meeting of the Forum for Information Retrieval Evaluation</article-title>
          , FIRE '25, Association for Computing Machinery,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Majchrzak</surname>
          </string-name>
          ,
          <article-title>Amused: An annotation framework of multimodal social media data</article-title>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G. K.</given-names>
            <surname>Shahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hegde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Satapara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Modha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganguly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Nandini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. L.</given-names>
            <surname>Shasirekha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jaiswal</surname>
          </string-name>
          , G. Pasi, T. Mandl,
          <article-title>Overview of the first shared task on prompt recovery for misinformation detection</article-title>
          (promid
          <year>2025</year>
          ), in: K. Ghosh,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mandl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Majumdar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          (Eds.), Working Notes of FIRE 2025 -
          <article-title>Forum for Information Retrieval Evaluation, Varanasi, India</article-title>
          .
          <source>December 17-20</source>
          ,
          <year>2025</year>
          , CEUR Workshop Proceedings, CEUR-WS.org,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>