<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>HIJLI-JU-CLEF at MULTI-Fake-DetectiVE: Multimodal Fake News Detection Using Deep Learning Approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sandip Sarkar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nripen Tudu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dipankar Das</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science and Application, Hijli College</institution>
          ,
          <addr-line>Kharagpur</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science and Engineering, Jadavpur University</institution>
          ,
          <addr-line>Kolkata</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>This report presents the progress made in developing a system for participation in the shared task MULTI-Fake-DetectiVE Task 1, which aims to detect and verify fake news in a multimodal environment comprising both textual and visual elements. The task primarily revolves around automatically identifying fake news by analyzing the combined use of text and images. Our system is structured into three distinct modules. The initial module is responsible for extracting textual information from images. The second module serves as a translation component, enabling the analysis of non-English text by converting it into English. Lastly, the classification module utilizes the outputs from the previous modules to predict the appropriate classes, allowing for accurate differentiation between various types of content. To achieve this, we extract information from both the image and the text, translating any non-English text into English, and both sets of data are then used to train a classification-based model that predicts the task labels. We obtained a Weighted Average F1-Score of 0.393 by implementing a Multi-head attention mechanism.</p>
      </abstract>
      <kwd-group>
        <kwd>Fake News</kwd>
        <kwd>Fake news detection</kwd>
        <kwd>Multi-modality</kwd>
        <kwd>Vision-Language models</kwd>
        <kwd>Large Language Models</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Over the past few years, there has been a substantial increase in Internet and social media usage. Unfortunately, this growth has been accompanied by a significant rise in the dissemination of fake news and misinformation. As a result, the ability to share information has become more accessible and is no longer limited to prominent news organizations. Visual content, such as images, holds greater prominence on social media platforms due to its intuitive nature. The intentional dissemination of fake news aims to harm the reputation of individuals or organizations, and it can serve as a propagandistic tool targeting political parties or specific communities. Fake news proliferates effortlessly across various platforms such as social media, online news platforms, blogs, messaging applications, and group conversations. To enhance its credibility and facilitate its dissemination on social media and other online channels, manipulated images and videos are frequently employed within fake news content. Certain websites and blogs employ deceitful designs and deceptive domain names to mimic legitimate news sources. Within closed groups and messaging apps, fake news can rapidly circulate among like-minded individuals through forwarded messages and posts. These techniques facilitate the swift dissemination of misinformation, further perpetuating false narratives.</p>
      <p>The combination of natural language processing (NLP) and computer vision is mutually beneficial when it comes to detecting fake news and analyzing textual and visual content. NLP focuses on examining the language used in news articles and social media posts to detect patterns and inconsistencies: it can identify and highlight untrue assertions, evaluate the reliability of sources, and verify information against reputable sources. Computer vision analyzes images and videos, detecting manipulated images by examining content, metadata, and contextual information, and can reveal indications of manipulation, deepfakes, or deceptive depictions. Combined, these methods provide a comprehensive approach to counteracting fake news.</p>
      <p>Our paper is organized as follows. In Section 2, we give a summary of related work, i.e., what other researchers have found in this area. In Section 3, we provide a detailed explanation of MULTI-Fake-DetectiVE Task 1, drawing on ideas and methods from previous research. In Section 4, we look at the dataset for MULTI-Fake-DetectiVE Task 1 and share some interesting numbers and facts about it. In Section 5, we go into detail about our model. In Section 6, we describe our results. Finally, we wrap up our paper in Section 7 with our conclusions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>Social media serves as a double-edged sword for news consumption. While it offers easy access to information at a low cost, it also enables the spread of fake news with intentionally false information [<xref ref-type="bibr" rid="ref1">1</xref>]. Because the use of text combined with images on social media has increased, numerous studies have examined how the combination of visual content and text can be used to anticipate the spread of false information.</p>
      <p>In this research area, the focus is on examining how the fusion of textual and visual content can be used to forecast deceptive information and false news. One study presents a multi-modal method that examines both text and images to detect patterns and discrepancies that suggest the presence of fake news [<xref ref-type="bibr" rid="ref2">2</xref>].</p>
      <p>Vo and Lee gathered and examined a group of online users known as guardians, who play a role in rectifying misinformation and fake news within online discussions by referencing fact-checking URLs. They introduced an innovative model for recommending fact-checking URLs, aiming to motivate guardians to participate actively in fact-checking endeavors [<xref ref-type="bibr" rid="ref3">3</xref>].</p>
      <p>Another framework employs a convolutional neural network model for image processing and a sentence transformer for text analysis. The features extracted from the visual and textual inputs are fed through dense layers and subsequently merged to predict the authenticity of images [<xref ref-type="bibr" rid="ref4">4</xref>].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Task Description</title>
      <p>MULTI-Fake-DetectiVE Task 1 requires systems to classify news items that combine text and images into one of four classes:
• Certainly Fake: news that is most certain to be fake, whatever the context.
• Probably Fake: news that is still likely to be fake, but may include some real information or at the very least be somewhat credible.
• Probably Real: news that is very credible but still retains some degree of uncertainty about the provided information.
• Certainly Real: news that is most certain to be real and incontestable, whatever the context.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Dataset Description</title>
      <p>The dataset used for this project consists of a wide range of social media posts and news articles that include both text and images. These posts and articles are closely connected to real-world events that are often targeted for spreading false information. Specifically, the dataset focuses on the Ukrainian-Russian conflict that began in February 2022.</p>
      <p>The provided download script was used to obtain the actual data. Table 1 shows the description of the dataset. The script creates a TSV (Tab-Separated Values) file and a directory named "Media" in the current working directory; the "Media" directory contains the downloaded images. The TSV file includes the following fields:</p>
      <p>I. ID: A unique identifier for the data point.
II. URL: The web address (URL) of the data point.
III. Date: The date when the data point was created. Please note that newspaper articles may not have this information available.
IV. Type: Indicates whether the data point is a news article or a tweet.
V. Text: The complete text of the data point.
VI. Media: The names of the image files associated with the data point, if any.
VII. Label: A numerical label that represents one of the four possible labels (i.e., Certainly Fake: 0, Probably Fake: 1, Probably Real: 2, Certainly Real: 3) specified in the task description.</p>
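      <p>As an illustration only (this loading step is not described in the paper), the TSV produced by the download script can be read with pandas and the numeric labels mapped back to their names; the file name and the exact column spellings below are assumptions.</p>
      <preformat>
import pandas as pd

# Hypothetical file name; the organizers' script writes a TSV plus a "Media" directory.
df = pd.read_csv("task1_data.tsv", sep="\t")

# Label ids as given in the task description.
label_names = {0: "Certainly Fake", 1: "Probably Fake", 2: "Probably Real", 3: "Certainly Real"}
df["label_name"] = df["Label"].map(label_names)   # assumes the column is called "Label"

print(df[["ID", "Type", "Label", "label_name"]].head())
      </preformat>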
    </sec>
    <sec id="sec-5">
      <title>5. Methodology</title>
      <p>Our system is structured into three modules. The first module extracts textual information from images. The second module is a link module that translates text written in different languages into English. The third module is dedicated to text classification and utilizes a combination of a BiLSTM (Bidirectional Long Short-Term Memory) model and a Multi-head attention model to classify text into different categories.</p>
      <sec id="sec-5-1">
        <title>5.1. Image to Text Generation Module</title>
        <p>Image captioning is the process of generating a caption, i.e., a description, from an input image. It requires both natural language processing and computer vision to generate the caption. Image captioning typically involves a deep learning approach, where a neural network model is trained on a large dataset of paired images and their corresponding captions. This model learns to extract visual features from the images and then generates captions based on these visual features. The architecture of the Image to Text Generation Model is illustrated in Figure 1.</p>
        <p>Image captioning is an example in which an encoder model is used to encode the image, after which an autoregressive language model, i.e., the decoder model, generates the caption. The model we use to generate text from images was developed and trained by Ankur (https://huggingface.co/nlpconnect/vit-gpt2-image-captioning). It is a Vision Encoder-Decoder Model, a type of neural network model used for tasks that involve processing both visual and textual information; an illustration of the architecture is available at https://ankur3107.github.io/assets/images/vision-encoderdecoder.png. It combines an encoder network, which processes the visual input, with a decoder network, which generates textual output.</p>
        <p>The encoder part of the model is responsible for extracting features from the visual input, such as an image. It typically consists of convolutional neural network (CNN) layers that can capture spatial and visual information from the input image. The encoder encodes the image into a fixed-length vector representation, often referred to as an "image embedding" or "visual features." The decoder part of the model takes the image embedding as input and generates textual output, such as captions or descriptions, related to the visual content. The decoder is typically implemented using recurrent neural networks (RNNs), such as long short-term memory (LSTM) or gated recurrent units (GRUs), which can process sequential data and generate coherent textual output.</p>
        <p>In our case, the model was initialized with a pre-trained Transformer-based vision encoder and a pre-trained language model, GPT2 (Generative Pre-trained Transformer 2), as the decoder. The vision encoder divides the input image into patches and treats them as tokens. These patches undergo self-attention and feed-forward networks within the Transformer encoder: self-attention captures relationships between patches, while the feed-forward networks refine the representations. Positional embeddings capture spatial information, and a classification head predicts the final output. The GPT2 decoder operates through the Transformer architecture and consists of multiple layers of self-attention and feed-forward neural networks. It takes a sequence of tokens as input and processes them iteratively, attending to relevant tokens and generating context-aware representations. The self-attention mechanism captures relationships between different tokens in the sequence, while the feed-forward networks apply non-linear transformations.</p>
        <p>The Vision Encoder-Decoder Model was trained on the Common Objects in Context (COCO) dataset, a collection of more than 120 thousand images with descriptions [<xref ref-type="bibr" rid="ref7">7</xref>]. The model is optimized to minimize the discrepancy between the predicted captions and the ground truth captions in the training data.</p>
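        <p>To make this module concrete, the following is a minimal sketch of caption generation with the nlpconnect/vit-gpt2-image-captioning checkpoint referenced above, using the Hugging Face transformers library; the image path and the generation settings (beam size, maximum length) are illustrative choices, not values reported here.</p>
        <preformat>
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

checkpoint = "nlpconnect/vit-gpt2-image-captioning"
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)   # ViT encoder + GPT2 decoder
processor = ViTImageProcessor.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

image = Image.open("example.jpg").convert("RGB")                # hypothetical image file
pixel_values = processor(images=[image], return_tensors="pt").pixel_values

# Autoregressive decoding of the caption from the image embedding.
output_ids = model.generate(pixel_values, max_length=32, num_beams=4)
caption = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(caption)
        </preformat>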
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Link Module</title>
        <p>The Link module establishes a connection between the image-to-text module and the text classification module. Since the data given by the organizers is in Italian, we need to convert it into a universal language, such as English. Subsequently, in the next module, we employ various classification methods.</p>
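        <p>The paper does not name the translation tool used in the Link module; as one possible realization, the sketch below translates Italian text into English with a publicly available MarianMT checkpoint (Helsinki-NLP/opus-mt-it-en) from the Hugging Face hub.</p>
        <preformat>
from transformers import MarianMTModel, MarianTokenizer

# Assumed checkpoint; any Italian-to-English translator could fill this role.
model_name = "Helsinki-NLP/opus-mt-it-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

italian_texts = ["Questa notizia è stata smentita dalle autorità."]  # toy example
batch = tokenizer(italian_texts, return_tensors="pt", padding=True, truncation=True)
generated = model.generate(**batch)
english_texts = tokenizer.batch_decode(generated, skip_special_tokens=True)
print(english_texts)
        </preformat>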
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Text Classification Module</title>
        <p>The text classification module is a vital component of the multi-modal fake news detection system, as it plays a pivotal role in analyzing the textual content and accurately categorizing it into one of the predefined labels: "Certainly Fake," "Probably Fake," "Probably Real," or "Certainly Real".</p>
        <p>To perform text classification, we have employed two distinct models: 1) a Bi-LSTM and 2) a Multi-head attention model. Further information about these models can be found in the subsequent sections.</p>
        <sec id="sec-5-3-1">
          <title>5.3.1. BiLSTM</title>
          <p>BiLSTM, which stands for Bidirectional Long Short-Term Memory, is a recurrent neural network (RNN) model that takes into account information from both previous and future contexts when analyzing sequential data. Unlike a traditional LSTM, which processes sequences in only one direction, a BiLSTM processes the sequence in both forward and backward directions concurrently. It finds applications across various domains in the field of NLP, such as text simplification, machine translation, text similarity, and numerous other areas [<xref ref-type="bibr" rid="ref8">8</xref>].</p>
          <p>In the BiLSTM model, the input sequence is passed through two distinct LSTM layers: one layer handles the sequence in the forward direction, while the other handles it in the backward direction. This enables the model to grasp relationships and contextual information from both preceding and succeeding elements within the sequence.</p>
          <p>Through the amalgamation of representations from both directions, BiLSTM adeptly captures a more holistic comprehension of the sequential data. This architecture is widely employed in various tasks, including natural language processing, speech recognition, and sentiment analysis, where incorporating context from both preceding and succeeding elements is crucial for precise predictions or classifications.</p>
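          <p>The paper does not give the exact configuration of the Bi-LSTM classifier; the PyTorch sketch below shows one plausible setup with a four-way output corresponding to the task labels (embedding and hidden sizes are illustrative).</p>
          <preformat>
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)            # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.bilstm(embedded)             # h_n: (2, batch, hidden_dim)
        # Concatenate the final forward and backward hidden states.
        features = torch.cat([h_n[0], h_n[1]], dim=-1)  # (batch, 2 * hidden_dim)
        return self.fc(features)                        # logits over the four labels
          </preformat>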
        </sec>
        <sec id="sec-5-3-2">
          <title>5.3.2. Multi-head Attention Mechanism</title>
          <p>The Multi-head Attention Mechanism is a component used in transformer-based neural networks that allows the model to attend to different parts of the input simultaneously and capture diverse relationships. It enhances the model’s ability to process and extract information from the input sequence effectively.</p>
          <p>In this mechanism, the input sequence is transformed into multiple query, key, and value representations through linear projections. These projections are then used to compute attention scores between different positions in the sequence. The attention scores determine the importance or relevance of each position with respect to the others.</p>
          <p>Multiple attention heads are employed in parallel, each attending to a different set of positions in the sequence. This enables the model to capture various patterns and dependencies at different levels of granularity. The output of each attention head is combined to form the final output, which incorporates information from multiple perspectives.</p>
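          <p>As an illustration of the mechanism just described (linear query/key/value projections, per-head attention scores, and concatenation of the head outputs), the following PyTorch sketch implements multi-head self-attention; the dimensions are illustrative and this is not the authors' exact classifier.</p>
          <preformat>
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, embed_dim=256, num_heads=8):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, embed_dim // num_heads
        # Linear projections producing queries, keys and values.
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, x):                        # x: (batch, seq_len, embed_dim)
        b, s, e = x.shape
        def split_heads(t):                      # -> (batch, heads, seq_len, head_dim)
            return t.view(b, s, self.num_heads, self.head_dim).transpose(1, 2)
        q, k, v = split_heads(self.q_proj(x)), split_heads(self.k_proj(x)), split_heads(self.v_proj(x))
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5   # attention scores
        weights = F.softmax(scores, dim=-1)      # relevance of each position
        context = weights @ v                    # weighted sum of the values
        context = context.transpose(1, 2).reshape(b, s, e)        # concatenate the heads
        return self.out_proj(context)
          </preformat>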
          <p>By utilizing the multi-head attention mechanism, the model can capture both local and global dependencies in the input sequence. It enhances the model’s ability to model long-range dependencies, improves performance on complex tasks such as machine translation, text generation, and image captioning, and allows for efficient parallel computation during training and inference.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Results</title>
      <p>Table 2 presents the rankings and performance of
different teams in MULTI-Fake-DetectiVE Task 1, based
on their weighted average F1 scores. The higher the F1
score, the better the performance of the team in that task.
The "Weighted Avg. F1-Score" column displays the
corresponding F1-score achieved by each team. The F1-score
is a measure of a model’s accuracy, combining precision
and recall, and the weighted average takes into account
the relative importance of each class in the evaluation.</p>
      <p>Our Multi-head attention run obtained a weighted average F1-score of 0.393. The team-runs listed in Table 2 are Polito-P1, extremITA-camoscio_lora, AIMH-MYPRIMARYRUN, Baseline-SVM_TEXT, Baseline-SVM_MULTI, Baseline-MLP_TEXT, Baseline-MLP_IMAGE, HIJLI-JU-CLEF-Multi, Baseline-SVM_IMAGE, Baseline-MLP_MULTI, and HIJLI-JU-CLEF-Bi-LSTM.</p>
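      <p>For reference, the weighted average F1-score used for the ranking can be computed with scikit-learn as in the toy sketch below; the labels follow the task encoding (0-3), and the data shown are illustrative, not actual predictions.</p>
      <preformat>
from sklearn.metrics import f1_score

# Toy gold and predicted labels over the four classes (0-3).
y_true = [0, 1, 2, 3, 2, 2, 1, 0]
y_pred = [0, 1, 2, 2, 2, 1, 1, 3]

# "weighted" averages the per-class F1 scores, weighting each class by its support.
print(f1_score(y_true, y_pred, average="weighted"))
      </preformat>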
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>In conclusion, the field of multi-modal fake news detection has witnessed significant research efforts aimed at detecting and verifying fake news in environments that involve both textual and visual elements. The studies mentioned above highlight the importance of considering multiple modalities, such as text and images, to enhance the accuracy and effectiveness of fake news detection systems.</p>
      <p>One key aspect addressed by these studies is the
combination of textual and visual content analysis to identify
patterns, inconsistencies, and misleading information
indicative of fake news. By analyzing the language used
in news articles, social media posts, and the visual
content accompanying them, researchers aim to detect and
characterize fake news more comprehensively.</p>
      <p>Our system leveraged the combination
of text and image analysis, employing state-of-the-art
techniques in NLP and computer vision to detect and
verify fake news in a multimodal environment. While
we acknowledge the challenges associated with this task,
our system demonstrates promising results and serves
as a foundation for further advancements in combating
fake news in multimedia contexts.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Shu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sliva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>Fake news detection on social media: A data mining perspective</article-title>
          ,
          <source>ACM SIGKDD Explorations Newsletter</source>
          <volume>19</volume>
          (
          <year>2017</year>
          )
          <fpage>22</fpage>
          -
          <lpage>36</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Rashkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Jang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Volkova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <article-title>Truth of varying shades: Analyzing language in fake news and political fact-checking</article-title>
          ,
          <source>in: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing</source>
          , Association for Computational Linguistics, Copenhagen, Denmark,
          <year>2017</year>
          , pp.
          <fpage>2931</fpage>
          -
          <lpage>2937</lpage>
          . URL: https://aclanthology.org/D17-1317. doi:10.18653/v1/D17-1317.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>N.</given-names>
            <surname>Vo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>The rise of guardians: Fact-checking url recommendation to combat fake news</article-title>
          ,
          <source>in: The 41st International ACM SIGIR Conference on Research &amp; Development in Information Retrieval</source>
          , SIGIR '18, Association for Computing Machinery, New York, NY, USA,
          <year>2018</year>
          , pp.
          <fpage>275</fpage>
          -
          <lpage>284</lpage>
          . URL: https://doi.org/10.1145/3209978.3210037. doi:10.1145/3209978.3210037.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. K.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <article-title>Predicting image credibility in fake news over social media using multimodal approach</article-title>
          ,
          <source>Neural Computing &amp; Applications</source>
          <volume>34</volume>
          (
          <year>2022</year>
          )
          <fpage>21503</fpage>
          -
          <lpage>21517</lpage>
          . URL: https://doi.org/10.1007/s00521-021-06086-4. doi:10.1007/s00521-021-06086-4.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bondielli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dell'Oglio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lenci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Marcelloni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. C.</given-names>
            <surname>Passaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sabbatini</surname>
          </string-name>
          ,
          <article-title>Multi-Fake-DetectiVE at EVALITA 2023: Overview of the multimodal fake news detection and verification task</article-title>
          , in:
          <source>Proceedings of the Eighth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2023)</source>
          , CEUR.org, Parma, Italy,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Lai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Menini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Polignano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Russo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sprugnoli</surname>
          </string-name>
          , G. Venturi,
          <article-title>EVALITA 2023: Overview of the 8th evaluation campaign of natural language processing and speech tools for Italian</article-title>
          , in:
          <source>Proceedings of the Eighth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2023)</source>
          , CEUR.org, Parma, Italy,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maire</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Belongie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Bourdev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Girshick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hays</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Perona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ramanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dollár</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Zitnick</surname>
          </string-name>
          ,
          <article-title>Microsoft COCO: Common objects in context</article-title>
          ,
          <source>CoRR abs/1405.0312</source>
          (
          <year>2014</year>
          ). URL: http://arxiv.org/abs/1405.0312. arXiv:1405.0312.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Sarkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Pakray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pinto</surname>
          </string-name>
          ,
          <article-title>A hybrid sequential model for text simplification</article-title>
          , in:
          <string-name>
            <given-names>N.</given-names>
            <surname>Priyadarshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Padmanaban</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Ghadai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Panda</surname>
          </string-name>
          , R. Patel (Eds.),
          <source>Advances in Power Systems and Energy Management</source>
          , Springer Nature Singapore, Singapore,
          <year>2021</year>
          , pp.
          <fpage>33</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>