<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshop on NLP applied to Misinformation, held as part of SEPLN</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Overview of NLP-MisInfo 2023: Workshop on NLP applied to Misinformation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Roberto Centeno</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rodrigo Agerri</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>HiTZ Center - Ixa, University of the Basque Country UPV/EHU</institution>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Universidad Nacional de Educación a Distancia (UNED)</institution>
          ,
          <addr-line>28040 Madrid</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>39</volume>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
<p>The 2023 Workshop on NLP applied to MisInformation (NLP-MisInfo 2023) is in its first edition, held as part of SEPLN 2023: the 39th International Conference of the Spanish Society for Natural Language Processing. NLP-MisInfo aims at fostering research on NLP technologies applied to misinformation mitigation, both at the theoretical level and at the level of practical real-world applications. The workshop aims at bringing together researchers, developers and industry practitioners interested in the problem of mitigating misinformation through NLP technologies. We discuss recent trends and research projects, as well as developments and advances being made in the area of NLP to address the problem of misinformation from different perspectives.</p>
      </abstract>
      <kwd-group>
        <kwd>Misinformation</kwd>
        <kwd>NLP</kwd>
        <kwd>disinformation</kwd>
        <kwd>fake news</kwd>
        <kwd>Harmful Information Detection</kwd>
        <kwd>fact-checking</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The impact of fake news on the global economy, on public health and even in creating panic
in society has been extensively documented in the past few years with countless examples
[
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. A large part of the high cost associated with the spread of fake news stems from the absence of control
and verification of information, which makes social media a fertile ground for the spread of
unverified or false information. With this in mind, we can affirm that the magnitude, diversity
and substantial dangers of fake news and, in more general terms, of the disinformation circulating
on social media are becoming a reason for concern due to the potential social cost they may have in
the near future [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. As a consequence, the research community in the field of Natural Language
Processing has been focusing on the detection of fake news and on intervention, using techniques
such as Machine Learning and Deep Learning [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ] and taking into account: i) content-based
features, which capture information that can be extracted from the text itself, e.g. linguistic features [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]; and ii)
context-based features, which capture surrounding information such as user characteristics, social
network propagation patterns, or users’ reactions to the information [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ].
      </p>
      <p>
        Some of these approaches have handled the phenomenon from a veracity perspective, labelling a
claim as “False” or “True” [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Others have tackled it from a stance perspective,
trying to determine whether a tweet (or a claim in general) is in favour of, against, or neutral towards a
given target entity (person, organization, etc.) [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], or even from a social perspective, studying
how misinformation spreads through social networks [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        In spite of the current advances in the field, we, as a society, must be aware not only of fake
news but also of the agents that introduce false or misleading information, their supporting
media, the nodes they use in social networks, the propaganda techniques they use, their
narratives and their intentions [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>Therefore, we must address these challenges by providing new techniques and methods to
identify and describe orchestrated disinformation campaigns, such as: detecting
misinformation; claim worthiness checking, stance detection and verified claim retrieval; models
of disinformation propagation and source detection using social network analysis; and identifying
malicious intent: the narratives to be spread, the benefited and injured agents, and the final goals.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Topics of Interest</title>
      <p>All those approaches that can serve, from different perspectives, to tackle the misinformation
problem in general, and by using NLP tools in particular, find their place in NLP-MisInfo 2023.
Specifically, the topics of interest include, but are not limited to:
• Dataset submissions. Present and describe a dataset related to the topic of the workshop
that has been or is being developed.
• Project submissions. Describe ongoing projects within the workshop’s topic, both
academic and industrial.</p>
      <p>• Original, unpublished contributions are also welcome.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Submissions</title>
      <p>The NLP-MisInfo 2023 Workshop received 7 submissions, of which 6 were accepted. Articles
were submitted from seven different countries: Spain, Poland, the United Kingdom, Saudi
Arabia, Switzerland, Estonia, and France. The accepted articles, collected in these Proceedings,
primarily address two topics. The first concerns the usage of NLP techniques for
detecting misinformation; the second concerns more general approaches, based on research
projects, for addressing the misinformation problem from a multi-perspective approach.</p>
      <p>With respect to the first topic, in the article entitled
“Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection
of Word Camouflage”, Álvaro Huertas-García et al. present a set of resources for addressing the misinformation
problem. In particular, the article introduces novel methodologies and tools to combat content evasion
in multilingual Natural Language Processing on social networks. A Python package,
“pyleetspeak”, is developed, offering a customizable system for simulating multilingual content
evasion through word camouflage techniques. The study also presents a synthetic multilingual
dataset of camouflaged words, facilitating the training of models for camouflage detection. The authors
show the utility of the tool in improving content moderation, enhancing online security, and
serving as a potential data augmentation tool for AI systems.</p>
      <p>On the same topic, we find the article entitled “ELAINE: rELiAbility and evIdence-aware News
vErifier”, by Carlos Badenes-Olmedo et al. It presents ELAINE, a hybrid proposal to detect the
veracity of news items that combines content reliability information with external evidence.
The external evidence is extracted from a scientific knowledge base that contains medical
information associated with the coronavirus, organized in a knowledge graph created from the
CORD-19 corpus. The information is accessed using Natural Language Question Answering, and a set
of pieces of evidence is extracted and their relevance measured. By combining both reliability and
evidence information, the veracity of news items can be predicted, which is very promising
for the veracity detection task.</p>
      <p>With the same objective of improving misinformation detection through NLP techniques, we
find the article entitled “Where Does It End? Long Named Entity Recognition for Propaganda
Detection and Beyond”, by Piotr Przybyła and Konrad Kaczyński. They investigate how
extensive span lengths affect the recognition of propaganda, showing that the task difficulty
indeed increases with the span length. They also propose a new solution, including an adaptive
convolution layer that facilitates the sharing of information between distant words. This allows
for improved length preservation without sacrificing overall performance.</p>
      <p>Finally, with regard to the first topic, the article entitled “Google Snippets and Twitter Posts;
Examining Similarities to Identify Misinformation”, by Saud Althabiti et al., investigates the
applicability of Google search and its results as a practical tool for detecting fake news on
platforms like Twitter. The research focuses explicitly on comparing Google search result
snippets with tweets to assess their similarity and determine whether such similarity can serve as an
indicator of misinformation. However, the study reveals that the observed similarity between
tweets and snippets does not necessarily correlate with news credibility.</p>
      <p>With respect to the second topic, research projects about misinformation, two articles
were submitted. The first, entitled “ERINIA: Evaluating the Robustness of Non-Credible Text
Identification by Anticipating Adversarial Actions”, by Piotr Przybyła and Horacio Saggion,
presents the ERINIA project, which aims to address the challenges posed by the
increasing importance of automatic assessment of text credibility. Text classifiers are commonly
used by platforms hosting user-generated content, including social media, to aid or replace
human moderation in filtering out text that is undesirable for some reason: bullying, hate
speech, fake news, etc. Unfortunately, deep neural networks are known for their vulnerability
to adversarial examples, i.e. data instances with small modifications that preserve the original
meaning, yet change the prediction of the target classifier. The article describes the research
actions of the ERINIA project, planned to tackle this challenge by assessing the robustness of
currently used classifiers in the misinformation context, creating better methods for discovering
adversarial examples, and detecting machine-generated content.</p>
      <p>Finally, the article entitled “HAMiSoN Project”, by Anselmo Peñas et al., presents the HAMiSoN
project, which aims at treating misinformation from a holistic view. The main challenge
is integrating the message level and the network level. To tackle this challenge, the authors propose to
reveal misinformation’s hidden intents: which agents introduce disinformation in social media,
which narratives they use, and which concrete aims they pursue (such as polarising, destabilising, generating
distrust, destroying reputation, etc.). They also propose to identify malicious and harmed agents
and to provide this information to the final analysts and users in explainable ways. Identifying
misleading messages, knowing their narratives and hidden intentions, modelling the diffusion
in social networks, and monitoring the sources of disinformation will also give us the chance to
react faster to the spreading of disinformation.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Keynote Speeches</title>
      <p>As part of the Workshop, two Keynote Speeches were given. The first was centred on the
industry point of view, i.e. how the industry works to mitigate misinformation. It was
entitled “Understanding the discourse: NLP in the fight against disinformation”, and was given by
Carlos Ponce, IT engineer at the well-known fact-checker Maldita.es. The second, with an NLP
point of view, entitled “Fake news and conspiracy theories: distinguishing conspiracy narrative
from critical thinking”, was given by Professor Paolo Rosso. Further details are given in the following.</p>
      <sec id="sec-4-1">
        <title>4.1. Understanding the discourse: NLP in the fight against disinformation</title>
        <p>Abstract: Disinformation is a global problem: it wins and loses elections, generates fear and
distrust in the population, and affects the security and integrity of people. At Maldita.es
they know this very well: they have been fighting against it and its effects for years. In this
workshop, we will take a practical tour of the workflow and the tools that the Maldita team
relies on to stand up in this battle. We will talk about their use of NLP to engage with their
audience and monitor public discourse, and the (sometimes unfathomable) use cases of Machine
Learning in the fight against misinformation and the creation of evidence-based content.</p>
        <sec id="sec-4-1-1">
          <title>Carlos Ponce</title>
          <p>Carlos Ponce is a Computer Engineer from the UPM, theatre
director and development manager at Maldita.es. The Maldita.es
Foundation exists to help citizens make decisions with all the
verified information in hand, so that they are not left behind in
the battle against misinformation. It does this through
journalism, technology, education and new narratives. Misinformation
affects all strata of society and is present in our daily lives; the
Foundation develops tools to combat it and generates
information based on evidence so that the different actors involved, from
legislators to content distribution platforms, including journalists, citizens and governments,
have verified data to rely on.</p>
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Fake news and conspiracy theories: distinguishing conspiracy narrative from critical thinking</title>
        <p>Abstract: The ease of generating content online has increased the amount of harmful
information that is published. Disinformation is published mostly on social media and
propagated on a daily basis. In this seminar I will try to stress the importance of going
beyond the analysis of (i) words, (ii) textual information, and (iii) fake news. In order to
do that we should: (i) integrate emotional signals and psycholinguistic characteristics into the
architecture of AI deep learning models; (ii) address disinformation detection from a
multilingual perspective; and (iii) consider that fake news can often be part of a conspiracy
theory and a disinformation campaign. Related to the latter, it is important to be able
to distinguish between conspiracy theories and critical thinking. A shared task on this
topic will be organised in 2024 at PAN (https://pan.webis.de/shared-tasks.html), both in Spanish and in English, with data from Telegram.</p>
        <sec id="sec-4-2-1">
          <title>Paolo Rosso</title>
          <p>Paolo Rosso is a Full Professor at the Universitat Politècnica
de València, where he is also a member of the Pattern
Recognition and Human Language Technology (PRHLT) research centre.</p>
          <p>His research interests are focused on social media data analysis,
mainly on fake news and hate speech detection, author profiling,
and sarcasm detection.</p>
          <p>He has published 50+ articles in journals (34 Q1) and 400+
articles in conferences and workshops; he has an H-index of
69 (source: Google Scholar) and he is in the ranking of the top
H-index scientists in Spain (http://www.guide2research.com/scientists/ES).
He has been PI of several national and international research projects
funded by the EC, the U.S. Army Research Office, the Qatar National
Research Fund, and Vodafone Spain.</p>
          <p>Currently, he is the PI of the research project XAI-DisInfodemics on eXplainable AI
for disinformation and conspiracy detection during infodemics (Spanish Ministry of Science
and Innovation), and of the Public Procurement with OBERAXE, the Spanish Observatory on
racism and xenophobia of the Secretary of State for Migration. Moreover, he is a member of
the EC IBERIFIER project on monitoring the threats of disinformation (European Digital
Media Observatory) and of the project on Resources and Applications for Detecting and Classifying
Polarized Hate Speech in Arabic Social Media (Qatar National Research Fund).</p>
          <p>He has been the advisor of 26 PhD theses and is currently the advisor of 8 PhD students.
He has given several keynotes (TSD-2020, CICLing-2019, etc.) and has helped organise 30+ shared
tasks at the PAN Lab at the CLEF and FIRE evaluation forums, SemEval, IberLEF and Evalita,
on topics such as author profiling (e.g. profiling bots, haters, and fake news spreaders), hate
speech detection, irony detection, and misogyny, sexism and toxic language identification, as well
as the MAMI shared task at SemEval 2022 on misogyny identification in memes. He has been
the chair of *SEM-2015, and has organised conferences in Valencia such as CERI-2012, CLEF-2013,
EACL-2017, and NLDB-2022. He has served as senior chair or track chair in conferences such as
SIGIR, ACL, etc.</p>
          <p>Since 2014 he has been Deputy Steering Committee Chair of the CLEF Association. He
is also an Associate Editor of the Information Processing &amp; Management journal. He has
given several tutorials on plagiarism detection at ICON-2010, author profiling at RuSSIR-2014,
RANLP-2015, FIRE-2016 and CLiC-it-2018, and harmful information (fake news and hate speech)
at CIKM-2020. During the last 10 years, the results obtained in plagiarism detection, irony
detection, author profiling, and credibility detection (fake news) were covered by Spanish (El
País, ABC, La Vanguardia, El Mundo, El Levante, El Confidencial, Radio Nacional de España,
La Cope) and international media (Reforma, Informador, CNN-Español). In 2022 he received
the UPV Research Award in the category of Excellent Publication in Engineering and
Technology for his work on the automatic identification and classification of misogynistic
language on Twitter.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Organizing Team</title>
      <p>The NLP-MisInfo 2023 Organizing Team was composed of the following people with respect to
their distinct roles:
• Two Co-chair Workshop Organizers;
• Fourteen Members of the Program Committee.</p>
      <sec id="sec-5-1">
        <title>5.1. Co-chairs</title>
        <sec id="sec-5-1-1">
          <title>Roberto Centeno</title>
          <p>Roberto Centeno is an Associate Professor at the Universidad
Nacional de Educación a Distancia (UNED), Department of
Languages and Informatics Systems (LSI), Madrid, Spain, where he
has developed his teaching and research career since 2010. In
2012 he obtained his PhD in Computer Science from Rey Juan
Carlos University, where he developed his doctoral thesis as an
FPI-MEC fellow from 2007 to 2010. In 2007 he obtained the Official
Master’s Degree in Information Technology and Computer Systems, and since 2006 he has been a
Computer Engineer, both from Rey Juan Carlos University. He is currently a member of the
Language Processing and Information Retrieval Research Group of the UNED, as well as of the
Center for Intelligent Information Technologies and their Applications (CETINIA) of the URJC.</p>
          <p>In recent years, his research lines have focused on the areas of misinformation mitigation,
fake news and stance detection on social networks, and on reputation and trust mechanisms
based on opinion systems. He is the author of around 20 publications in JCR-indexed journals and
conferences classified as highly relevant and relevant. According to Google Scholar, he
has an h-index of 13 with over 470 citations. He has participated in various international
and national research projects, collaborating with several different institutions, focused on
the application of artificial intelligence techniques to solve real-world problems. Web site:
http://nlp.uned.es/~rcenteno/index.php</p>
        </sec>
        <sec id="sec-5-1-3">
          <title>Rodrigo Agerri</title>
          <p>Rodrigo Agerri is a Ramón y Cajal Research Fellow (tenure
track) at the IXA Group, part of the HiTZ Centre of the University
of the Basque Country UPV/EHU, where he is head of the Text
Analysis unit. He obtained a PhD in Computer Science at City,
University of London (2007), and has since been working on Natural
Language Processing at several British and Spanish institutions,
including a two-year stint in industry as a research project
director. He has been involved as PI or collaborator in more than
40 research projects funded by the European Commission, UK
research councils, the Spanish Ministry of Science and the Basque
Government, and has published in major journals (Artificial Intelligence,
etc.) and conferences (ACL, EMNLP, EACL, IJCAI, etc.) related
to Artificial Intelligence and Natural Language Processing.</p>
          <p>Currently, his research is focused on Computational Semantics
and Information Extraction, with a strong focus on multilingual and cross-lingual approaches.
He was the creator and main developer of IXA pipes, a set of ready-to-use multilingual tools for
linguistic processing. He is also a PMC member and committer in the OpenNLP project of the Apache
Software Foundation. Web site: https://ragerri.github.io/</p>
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Program Committee</title>
        <p>• Óscar Araque, GSI, Universidad Politécnica de Madrid (UPM)
• Carlos Badenes-Olmedo, Ontology Engineering Group (OEG), Universidad Politécnica de Madrid (UPM)
• David Camacho, Applied Intelligence &amp; Data Analysis group, Universidad Politécnica de Madrid (UPM)
• Jorge Carrillo-de-Albornoz, NLP &amp; IR, Universidad Nacional de Educación a Distancia (UNED)
• Pablo Hernandez, Maldita.es
• Manuel Montes, Laboratory of Language Technologies of the Computational Sciences Department (INAOE), México
• Borja Lozano, Newtral
• Laura Plaza, NLP &amp; IR, Universidad Nacional de Educación a Distancia (UNED)
• Anselmo Peñas, NLP &amp; IR, Universidad Nacional de Educación a Distancia (UNED)
• Álvaro Rodrigo, NLP &amp; IR, Universidad Nacional de Educación a Distancia (UNED)
• Paolo Rosso, PRHLT Research Center, Universitat Politècnica de València (UPV)
• Fernando Sánchez, GSI, Universidad Politécnica de Madrid (UPM)
• Estela Saquete, Natural Language Processing and Information Systems Group, University of Alicante
• Mariona Taulé Delor, CLiC, the Language and Computation Center, University of Barcelona</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>We would like to thank the authors of the submitted articles for their interest in the considered
problem, the Keynote Speakers for the interest they aroused in new research directions, and the
members of the Program Committee for their valuable contribution to the success of the
NLP-MisInfo 2023 Workshop.</p>
      <p>Roberto Centeno would like to acknowledge the DeepInfo project (PID2021-127777OB-C22)
(MCIU/AEI/FEDER, UE) and the CHIST-ERA HAMiSoN project, grant
CHIST-ERA-21-OSNEM002, by AEI PCI2022-135026-2, funded by the Spanish Research Agency (Agencia Estatal de
Investigación).</p>
      <p>Rodrigo Agerri would like to acknowledge the Basque Government (Research group funding
IT-1805-22) and the following MCIN/AEI/10.13039/501100011033 projects: (i) DeepKnowledge
(PID2021-127777OB-C21) and ERDF A way of making Europe; (ii) Disargue
(TED2021-130810BC21) and European Union NextGenerationEU/PRTR. Furthermore, Rodrigo Agerri is funded by
the RYC-2017-23647 fellowship (MCIN/AEI/10.13039/501100011033 and ESF Investing in Your
Future).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R. N.</given-names>
            <surname>Zaeem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. S.</given-names>
            <surname>Barber</surname>
          </string-name>
          ,
          <article-title>On sentiment of online fake news</article-title>
          ,
          <source>in: 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>760</fpage>
          -
          <lpage>767</lpage>
          . doi:10.1109/ASONAM49781.2020.9381323.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Ghorbani</surname>
          </string-name>
          ,
          <article-title>An overview of online fake news: Characterization, detection, and discussion</article-title>
          ,
          <source>Inf. Process. Manage</source>
          .
          <volume>57</volume>
          (
          <year>2020</year>
          ). URL: https://doi.org/10.1016/j.ipm.2019.03.004. doi:10.1016/j.ipm.2019.03.004.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. T.</given-names>
            <surname>Aghayan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <article-title>Misinformation detection in online content</article-title>
          ,
          <year>2020</year>
          . US Patent App. 16/019,898.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Kaliyar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Misinformation detection on online social media-a survey</article-title>
          ,
          <source>2019 10th International Conference on Computing, Communication and Networking Technologies (ICCCNT)</source>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . URL: https://api.semanticscholar.org/CorpusID:209695880.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-R.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>Motivations, methods and metrics of misinformation detection: An nlp perspective</article-title>
          ,
          <source>Natural Language Processing Research</source>
          <volume>1</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . URL: https://doi.org/10.2991/nlpr.d.200522.001. doi:10.2991/nlpr.d.200522.001.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Oshikawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Qian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>A survey on natural language processing for fake news detection</article-title>
          ,
          <year>2020</year>
          . arXiv:1811.00770.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <source>The Routledge Handbook of Chinese Applied Linguistics</source>
          ,
          <volume>32</volume>
          , Routledge, London, UK,
          <year>2019</year>
          , p.
          <fpage>16</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ihsan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ayub</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shivakumara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Noor</surname>
          </string-name>
          ,
          <article-title>Fake news detection techniques on social media: A survey</article-title>
          ,
          <source>Wireless Communications and Mobile Computing</source>
          <year>2022</year>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          . doi:10.1155/2022/6072084.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N. R.</given-names>
            <surname>de Oliveira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Pisa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Lopez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S. V.</given-names>
            <surname>de Medeiros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M. F.</given-names>
            <surname>Mattos</surname>
          </string-name>
          ,
          <article-title>Identifying fake news on social networks based on natural language processing: Trends and challenges</article-title>
          ,
          <source>Information</source>
          <volume>12</volume>
          (
          <year>2021</year>
          ). URL: https://www.mdpi.com/2078-2489/12/1/38. doi:10.3390/info12010038.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Vosoughi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Aral</surname>
          </string-name>
          ,
          <article-title>The spread of true and false news online</article-title>
          ,
          <source>Science</source>
          <volume>359</volume>
          (
          <year>2018</year>
          )
          <fpage>1146</fpage>
          -
          <lpage>1151</lpage>
          . doi:10.1126/science.aap9559.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Dungs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Fuhr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Bontcheva</surname>
          </string-name>
          ,
          <article-title>Can rumour stance alone predict veracity?</article-title>
          ,
          <source>in: Proceedings of the 27th International Conference on Computational Linguistics</source>
          , Association for Computational Linguistics, Santa Fe, New Mexico, USA,
          <year>2018</year>
          , pp.
          <fpage>3360</fpage>
          -
          <lpage>3370</lpage>
          . URL: https://aclanthology.org/C18-1284.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>Spread of misinformation on social media: What contributes to it and how to combat it</article-title>
          ,
          <source>Computers in Human Behavior</source>
          (
          <year>2022</year>
          )
          <fpage>107643</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. T.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Why people spread rumors on social media: developing and validating a multi-attribute model of online rumor dissemination</article-title>
          ,
          <source>Online Inf. Rev</source>
          .
          <volume>45</volume>
          (
          <year>2021</year>
          )
          <fpage>1227</fpage>
          -
          <lpage>1246</lpage>
          . URL: https://doi.org/10.1108/OIR-08-2020-0374. doi:10.1108/OIR-08-2020-0374.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>