<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>The Italian Conference on CyberSecurity, May</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Graph for Fact-Checking: Google Map Hacks Debunking</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Giuseppe Fenza</string-name>
          <email>gfenza@unisa.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vincenzo Loia</string-name>
          <email>loia@unisa.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paola Montserrat Mainardi</string-name>
          <email>p.mainardi2@studenti.unisa.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Claudio Stanzione</string-name>
          <email>stanzione.dottorando@casd.difesa.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <!-- Keywords: Misinformation, Fact-Checking, Debunking, OSINT -->
        <aff id="aff0">
          <label>0</label>
          <institution>Defence Analysis &amp; Research Institute, Center for Higher Defence Studies</institution>
          ,
          <addr-line>00165 Rome (RM)</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Management &amp; Innovation Systems, University of Salerno</institution>
          ,
          <addr-line>84084 Fisciano (SA)</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>0</volume>
      <fpage>3</fpage>
      <lpage>05</lpage>
      <abstract>
        <p>The continuing spread of disinformation on the Web has caused a great demand for automatic fact-checking tools to help institutions and practitioners check the truthiness of web content. The existing literature presents numerous solutions for specific tasks, such as textual fact-checking or image/video truthiness assessment, which, together with the availability of Open Source Intelligence (OSINT) tools and principles, pave the way for new comprehensive solutions. This work introduces a Knowledge Graph-based approach for fact-checking and news debunking. The idea is to map fact-checking workflow activities leveraging OSINT to specific scenarios emerging from Web and social media monitoring. The reference Knowledge Graph is constructed by analyzing sources through Text Mining and Semantic Analysis techniques. Finally, a real case study is carried out to show the applicability of the approach for fact-checking purposes.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Fact-checking techniques may be employed to prevent disinformation proliferation and the
consequential anxiety among people. In detail, these activities are meant to clarify
primarily false information that has been presented and thus push the recipient to think more deeply about
the published facts [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. To prove content falsity, there is a need to collect further data related
to the news and compare them to spot inconsistencies. News debunking is a
multi-level task, since it requires checking not only the truthiness of the news/post content but also
its dates, location, source, and provenance. In this regard, some OSINT tools and services may help
fact-checkers seeking information on the Web. For instance, Google Maps, with its Street
View, makes it possible to acquire images of a cited location to check elements in posted images;
Google Lens allows finding elements from posted images, such as features suggesting places,
or pictures posted by other users or sites with meaningful information that can support the
analysis.
      </p>
      <p>Notwithstanding this digital support, proving the truth of content may be a very stressful
task to fulfill due to the effort humans spend checking the multiple features related to the
news and choosing which aspect to focus on first or which data to search for to spot distinctive
elements for the analysis. Therefore, questions arise: What are the elements (e.g., provenance,
source, content, etc.) to be checked first? What kind of data should fact-checkers consider
and retrieve to prove fact truthiness? Could the activities be organized into a unified method to
perform reliable fact-checking?</p>
      <p>
        This study tries to answer these questions by describing and applying a cognitive approach
depicting the various stages to consider, the data to acquire, and the different OSINT technologies to
perform fact-checking based on previous knowledge about similar facts or events. The approach
leverages a Knowledge Graph [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], constructed through Text Mining and Semantic Analysis of
the considered pieces of content, which guides the selection of suitable fact-checking activities.
The functioning of the proposed model is proved by carrying out a complete case study about
real news regarding an experiment performed by artist Simon Weckert, consisting of tricking
the Google Maps service with simulated traffic congestion. The paper contribution consists of a
cognitive approach for fact-checking content based on a Knowledge Graph. In particular:
• The KG exploits Text Mining and Semantic Analysis;
• The KG suggests suitable actions (among those in the fact-checking workflow) based on
experience;
• The KG, at the same time, harvests experts’ feedback to keep itself updated;
• A new ontological model (Debunking Model) is defined for representing domain-specific
concepts;
• The Debunking Model helps in identifying inconsistencies among pieces of evidence;
• The Semantic Analysis consists of inheriting and combining existing ontologies into the one
depicted in this work.
      </p>
      <p>• The approach’s applicability has been proven through a real example of fact-checking.</p>
      <p>The rest of the paper is organized as follows: Section 2 reviews related work, Section 3
introduces the proposed cognitive approach for fact-checking, and Section 4 demonstrates its
potential in a real case study. Conclusions close the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Many approaches have explored fact-checking techniques to discover different forms of disinformation.
Some works, in particular, focussed on debunking rumors on social networks to analyze
behaviors [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] or exploring specific features, such as the unmasking of fake reviews [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] or the
employment of denials to increase sharing and spread potential [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Other works focussed on
the interplay between rumor spreading and debunking, ending up with a model determining
when a debunking application is needed or most effective [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        Other types of research focus on designing automated or semi-automated tools for
fact-checking, such as [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which introduces a model based on semantic similarity and natural language
inference to perform multilingual fact-checking and hoax spread monitoring. Other solutions
explore knowledge-based systems to deal with fact-checking, such as [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which presents a
semantic structure that exploits a knowledge graph for fact learning and verification. In other
cases, the idea consists of comparing different sources with different credibility levels [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], or
analyzing possible biases in news perception and exploring how partisan leanings influence
the news selection algorithm for fact-checking [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        Some other works employed knowledge to achieve more robust fact-checking: for instance, [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]
proposes an approach combining the Wikidata5M knowledge graph and Wikipedia documents
to incorporate external knowledge into the claim. In addition, the paper by [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] introduces a
hybrid fake news detection system combining linguistic and knowledge-based features to spot
fake news on social networks.
      </p>
      <p>In contrast to the existing literature, this work proposes a cognitive approach whose Knowledge
Graph (KG) suggests suitable actions (among those in the fact-checking workflow) based on
experience and, at the same time, harvests experts’ feedback to keep the KG updated. Moreover, the
proposed approach is applied to an existing case study to check its suitability for debunking and
fact-checking.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Methodology</title>
      <p>This paper proposes a methodology consisting of constructing and maintaining a Knowledge
Graph (KG) to suggest suitable fact-checking activities among the ones described by the
fact-checking workflow. The KG is fed with data taken from the Web and social media (see Figure 1),
as well as experts’ feedback.</p>
      <p>The following subsections detail the KG and the workflow used in the proposed methodology.</p>
      <sec id="sec-3-1">
        <title>3.1. Knowledge Graph for Fact-Checking</title>
        <p>As already stated, the proposed methodology is based on a Knowledge Graph (KG) fed with
suggestions, doubts, or skepticism noticed on the Web or social media. The idea consists
of identifying sources of interest (e.g., Facebook pages, Twitter accounts, etc.) and constantly
monitoring their evolution, mainly regarding specific topics. Relevant recommendations can be
extracted by scraping such pages and applying Natural Language Processing (NLP) techniques
combined with semantic analysis. Once conveniently processed and conceptualized, these
recommendations populate the KG and give directions for subsequent fact-checking activities. Moreover,
the experts’ choices and decisions further contribute to the KG’s growth and update. For example,
suppose the expert realizes a correlation between a news aspect and a workflow phase: such
intuition feeds the graph.</p>
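        <p>As an illustration, the monitoring-and-population loop described above can be sketched in a few lines of Python. Everything here is a toy stand-in: the triple store is a plain set and the “semantic analysis” is a hand-made cue lexicon, whereas the actual methodology relies on full NLP and the Debunking Model ontology.</p>

```python
import re
from collections import defaultdict

# Minimal triple store: (subject, predicate, object) edges of the KG.
class KnowledgeGraph:
    def __init__(self):
        self.triples = set()
        self.index = defaultdict(set)  # subject -> triples about it

    def add(self, s, p, o):
        self.triples.add((s, p, o))
        self.index[s].add((s, p, o))

    def suggested_actions(self, topic):
        # Workflow activities previously linked to this topic.
        return sorted(o for (s, p, o) in self.index[topic]
                      if p == "suggests_activity")

# Toy "semantic analysis": map doubt-expressing cues found in monitored
# posts to fact-checking workflow pillars (cue lexicon is made up).
CUE_TO_PILLAR = {
    r"\bwhen\b|\bdate\b|\bold photo\b": "check_date",
    r"\bwhere\b|\blocation\b": "check_location",
    r"\bwho posted\b|\bsource\b": "check_source",
    r"\boriginal\b|\brepost\b": "check_provenance",
}

def ingest_post(kg, topic, text):
    for pattern, activity in CUE_TO_PILLAR.items():
        if re.search(pattern, text, re.IGNORECASE):
            kg.add(topic, "suggests_activity", activity)

kg = KnowledgeGraph()
ingest_post(kg, "google_maps_hack", "Is this an old photo? When was it taken?")
ingest_post(kg, "google_maps_hack", "Who posted this first? Looks like a repost.")
print(kg.suggested_actions("google_maps_hack"))
# ['check_date', 'check_provenance', 'check_source']
```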
        <p>The graph construction leverages NLP techniques and the Debunking Model for the semantic
analysis. The Debunking Model is an ontological model relating the main aspects involved
in the workflow, used to interface with and analyze the data collected about the article or post (i.e., container,
content, user profiles, etc.). The ontology is built over state-of-the-art ontologies (i.e., OWL1,
OWL-Time2, etc.) to model knowledge about the people writing and sharing web contents and the
temporal and spatial information related to the contents and people’s activities, respectively.
The new classes and properties added to existing ontologies to represent the main aspects involved
in fact-checking and debunking activities are depicted in Figure 2 and summarized as follows:
• Class Container and its subclass Website represent the web space where contents are
shared.
• Class Publication represents the posted content, which may be an article on a news site
(class Article) or a post on a social network (class SocialNetwork). Elements of a post or
an article, such as text, photos, links and videos, are represented as instances of Piece of
Content.
• Class Account represents the people who wrote and shared specific content (e.g., an article, a
post) on a Website.</p>
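        <p>A minimal in-code mirror of these classes may clarify how inconsistencies among pieces of evidence are surfaced. The sketch below uses plain Python dataclasses instead of OWL, and the date fields and consistency check are illustrative assumptions, not part of the published model.</p>

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Container:           # web space where contents are shared
    url: str

@dataclass
class Website(Container):  # subclass of Container
    pass

@dataclass
class PieceOfContent:      # text, photo, link, or video in a publication
    kind: str
    claimed_date: Optional[str] = None
    metadata_date: Optional[str] = None

@dataclass
class Publication:         # posted content (article or social post)
    title: str
    container: Container
    pieces: List[PieceOfContent] = field(default_factory=list)

@dataclass
class Account:             # person who wrote/shared the content
    handle: str
    publications: List[Publication] = field(default_factory=list)

def date_inconsistencies(pub):
    # Flag pieces whose embedded metadata contradicts the claimed date.
    return [p for p in pub.pieces
            if p.claimed_date and p.metadata_date
            and p.claimed_date != p.metadata_date]

site = Website("https://www.simonweckert.com")
photo = PieceOfContent("photo", claimed_date="2020-02-01",
                       metadata_date="2019-10-06")
pub = Publication("Google Maps Hack", site, [photo])
print(len(date_inconsistencies(pub)))  # 1
```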
        <p>Moreover, Figure 2 also demonstrates a simplified example of ontology instantiation based
on the case study described in Section 4.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Fact-checking Workflow</title>
        <p>
          Fact-checking experts, mainly from important news agencies, have shared numerous hints aimed
at evaluating content trustworthiness. In particular, the workflow suggested by Urbani [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]
and described in this section exploits Open Source INTelligence (OSINT) fundamentals. OSINT
tools, by collecting and analyzing publicly available information, can assist in, for example,
certifying the ownership of an image or identifying a location in it. The process mainly consists
of verifying a piece of content’s provenance, source, date, location, and motivation. In particular:
• Provenance ensures reference to the original article or piece of content.
• Finding the source refers to identifying who created the original piece of content.
• The date refers to when the content was created.
• The location identifies where the piece of content was captured.
        </p>
        <p>• The motivation aims to consider what caused the capturing of the piece of content.</p>
        <p>Each pillar contributes to a better understanding of the content and its reliability. The
following subsections give details about each pillar.</p>
        <p>Finding the provenance of content means examining its original form to more easily
understand who posted it, when, where, and why. Techniques to discover the original content
depend on the type of content. For images, Reverse Image Search, which looks up the content in
large databases (e.g., Google Images), can be a solution. In the case of videos, a frame from
the video can be subjected to a reverse image search through Reverse Video Search. In some
situations, finding the original content through these strategies is difficult; searching for it in
more private and anonymous locations could then be helpful. Some examples are Reddit3, 4chan4,
Discord5, and, where appropriate, Twitter and Facebook.
1http://owl.man.ac.uk/2006/07/sssw/people.owl
2https://www.w3.org/TR/owl-time/</p>
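        <p>Under the hood, reverse image search engines index compact perceptual fingerprints of pictures. Below is a minimal sketch of one such fingerprint, the average hash, computed on hand-written grayscale grids; real systems, of course, hash resized actual images and query far larger indexes.</p>

```python
def average_hash(pixels):
    # pixels: NxN grid of grayscale values in 0..255.
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: 1 if brighter than the image mean.
    return "".join("1" if v > mean else "0" for v in flat)

def hamming(h1, h2):
    # Number of differing bits; small distance = likely the same scene.
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [220, 30]]
repost   = [[12, 198], [225, 28]]   # same scene, slightly re-encoded
other    = [[200, 10], [30, 220]]   # different scene

h0, h1, h2 = map(average_hash, (original, repost, other))
print(hamming(h0, h1), hamming(h0, h2))  # 0 4
```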
        <p>The source of content (i.e., owner) can be a valid indication of its reliability. However, since
everyone can re-post a piece of content written or captured by someone else on the Internet, it
is mandatory to identify its real "owner".</p>
        <p>Once the first uploader is found, we should understand if the content is coherent with
the authors’ geographic position, other shared contents, and so on. In particular, it could be
interesting to investigate authors’ social accounts, make reverse image searches of account
images, search for shared posts in Google to understand if there is embedded content, and
so forth. In addition, checking if declared email addresses are associated with any user (for
example, through Skype) could help determine the source’s credibility and, for example, make
sure it is not an automated account (i.e., a bot). In this regard, specific techniques can be used
(e.g., learning models), or attention can be paid to the volume of daily posts and whether a
period of silence is associated with nighttime rest.</p>
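        <p>The two signals mentioned above, daily posting volume and nighttime silence, can be sketched as a simple heuristic; the thresholds and sample timestamps below are illustrative assumptions, not values from the paper.</p>

```python
from collections import Counter
from datetime import datetime

def looks_automated(timestamps, max_daily=50, night=(1, 5)):
    stamps = [datetime.fromisoformat(t) for t in timestamps]
    per_day = Counter(s.date() for s in stamps)
    heavy_volume = max(per_day.values()) > max_daily
    posts_at_night = any(night[0] <= s.hour < night[1] for s in stamps)
    # A human-like account rests at night and posts in moderation.
    return heavy_volume or posts_at_night

human = ["2020-02-01T09:15:00", "2020-02-01T18:40:00", "2020-02-02T12:05:00"]
bot = [f"2020-02-01T{h:02d}:00:00" for h in range(24)]  # posts every hour
print(looks_automated(human), looks_automated(bot))  # False True
```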
        <p>Finding the date means determining when the original content was created. A starting
point is referring to timestamps associated with a post or file metadata, for example, in the
case of image files, the Exif (Exchangeable image file format). However, since these types of
metadata are not always available, some further checks could include observing the video/image
to understand the period of the year. In this sense, handy tools are:
• SunCalc6. It allows viewing the sun’s angle on a specific day at a particular location,
which can help identify the time associated with an event in a photo or video.
• Wolfram Alpha7. It is a computational knowledge engine that, among other things,
enables you to look up the weather for a specific date. This way, a check can be done
between the declared date and the weather for that date.</p>
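        <p>The date cross-check can be sketched as follows; the Exif dictionary is hand-copied here for illustration, whereas in practice a tool such as FotoForensics (or an image library) would supply it.</p>

```python
from datetime import date, datetime

def exif_creation_date(exif):
    # Exif "DateTimeOriginal" uses the "YYYY:MM:DD HH:MM:SS" format.
    raw = exif.get("DateTimeOriginal")
    return datetime.strptime(raw, "%Y:%m:%d %H:%M:%S").date() if raw else None

claimed = date(2020, 2, 1)                        # declared posting date
exif = {"DateTimeOriginal": "2019:10:06 14:22:31"}  # extracted metadata

created = exif_creation_date(exif)
gap = (claimed - created).days
print(created.isoformat(), gap)  # 2019-10-06 118
```

A gap of several months between the metadata creation date and the declared date is exactly the kind of inconsistency the workflow is designed to surface.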
        <p>Similar problems to date reconstruction may arise with location identification, since
geotags are not always available and may not accurately reflect the location claimed in the content.
Finding specific details in the image or video and doing research using satellite imagery may be
helpful for this aim. Examples include looking for squares, signs, flags, banners, etc., to attempt
to associate a location with the picture or video. Spoken language and clothing could also be
helpful. However, special attention should be paid to how up to date the imagery is and to the latest
events in the area, considering the most recent local events (e.g., war or extremely severe
weather) that may significantly impact the terrain.</p>
        <p>Regarding motivation, it is difficult to identify common hints: the process strictly depends
on the considered piece of content. However, in general terms, it could be helpful to find the
involved people’s affiliations or communities and, where possible, try to speak directly with
them. Motivation could be better depicted by extracting the context of the news facts through
analysis of comments on the post/news, tweets, and other web resources referencing the article
considered.
3https://camas.github.io/reddit-search/
4https://4chansearch.com/
5https://discord.com/
6https://www.suncalc.org/
7https://www.wolframalpha.com/</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Case Study</title>
      <p>
        The KG-based approach described in Section 3 has been demonstrated in a case study based
on an experiment by the German artist Simon Weckert, who walked through empty streets of Berlin
pulling a little wagon containing 99 smartphones, each running a GPS-enabled maps service. The artist
aimed to simulate a fake traffic congestion event by exploiting the high number of devices, their
closeness, and the slow movement of the wagon. The experiment was introduced by the artist
in an article called “Google Maps Hack” on his personal blog8 and reported in the book “How
Algorithms Create and Prevent Fake News” [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>The objective of the case study is to question some technical aspects of the experiment that
the author does not fully describe. Following the approach proposed in Section 3, we tried to
extract useful information to assess the feasibility of the experiment. In particular, the
case study starts with identifying relevant suggestions about the experiment itself on the Web
to realize which pillars of the workflow they bring up. Moreover, each pillar investigation
contributes to populating the Debunking Model, as depicted in Figure 2. Finally, the KG highlights
incoherence among the findings.</p>
      <p>The following subsections detail each pillar analyzed.</p>
      <sec id="sec-4-1">
        <title>4.1. Provenance</title>
        <p>News about the experiment appeared on many news sites and blogs; however, we focused on
original pictures and information posted on Weckert’s site.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Source</title>
        <p>The source of the information is the artist himself, who posted information about the experiment
on his personal site. However, through cross searches, we also identified his Twitter account9,
from which we also found the date of publication of the experiment itself. The tweet posting the
experiment had a lot of resonance. In particular, other Twitter users’ comments questioned the
technical modalities of the experiment. For example, they expressed doubts about the adopted
devices, Internet connections, and attainability (see Figure 3); Weckert did not clarify all of these
factors. This aspect is examined further below.</p>
        <p>Since the pictures depicting the adopted smartphones (see Figure 4) are insufficient to discover
device details, we explored the Web to find other helpful information. In a video posted by
Arte TV10, we captured the image in Figure 5, in which the smartphones are better visible. The first
remark regards the difference between the smartphone screens in Figure 4 and Figure 5: in the
latter, despite the lack of light during the experiment’s execution, the screens are darker and less
visible.</p>
        <p>In order to acquire more information about the devices, we searched for similar images and
identified the smartphone model through Google Lens. It is a Huawei Mate 20 Pro, a particularly
expensive model released in 2018. Therefore, the hypothesis that Weckert bought all these
expensive devices is hard to believe. Instead, many suppose he rented or purchased second-hand
devices; in any case, he provides no further details about them.
8https://www.simonweckert.com/googlemapshacks.html
9https://twitter.com/simon_deliver
10https://www.arte.tv/de/videos/102583-000-A/simon-weckert-arte-tracks/</p>
        <p>Another crucial element in proving the experiment’s truthiness is geo-localization. Since the
smartphones are all grouped in the wagon, they have very close or even identical GPS coordinates,
which does not match a traffic congestion scenario but rather resembles an accident involving 99
vehicles. However fascinating and unreal, the 99-car accident thesis must nonetheless be rejected,
since the cart moves and crashed cars do not.</p>
      </sec>
      <sec id="sec-4-2-1">
        <title>4.3. Date</title>
        <p>From Weckert’s website, it is unclear when the experiment was performed. The only available
information is the date of the news posting on Twitter (i.e., 02/01/2020). Thus, through the
FotoForensics11 tool, we extracted image metadata that, for the pictures in Figure 6 (i.e., not
subsequently modified), reports October 06, 2019, as the creation date. For the images in Figure 7,
only the modification date was available, i.e., October 14, 2019.</p>
        <p>According to a search on Wolfram Alpha, the weather on the given date is congruent with the
conditions in the photos. Moreover, the sun’s direction, evaluated through SunCalc, is congruent
with the lighting in the pictures.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.4. Location</title>
        <p>Regarding location identification, we know a priori that Weckert conducted the experiment
in Berlin, Germany. However, the experiment description leaves the exact distance covered and
the main streets visited unclear. So, to further analyze the feasibility of the
experiment, we performed a focused analysis to reconstruct its path, as described below.</p>
        <p>By leveraging picture metadata, we know the chronological sequence of the pictures, as depicted
in Figure 6. Moreover, from the images in Figure 7, we can extract the traversed streets (which,
through searches on Google Street View, also match the associated images):
• An der Schillingbrücke
• Michaelbrücke
• Ziegelstraße and Ebertbrücke
• Mittelstraße
• Geschwister-Scholl-Straße</p>
        <p>Discovering the locations reported in Figure 6’s pictures requires an in-depth analysis
combining Google Lens and Google Street View. In particular, we can recognize, in
chronological order:</p>
        <p>The third location in Figure 6 is impossible to identify since there is no recognizable place.</p>
        <p>The longest path among the cited places is between three and four kilometers (depending on
the type of path selected). In this sense, note that on the maps shown on Weckert’s site,
with traffic mode on, the path appears as full green lines along the roadside, as reported in
the leftmost image in Figure 8. Since this configuration appears on maps showing paths on
foot, we assume that Google Maps detected the smartphones as people walking along the road;
otherwise, the result would have been the one shown in the right image in Figure 8 (i.e.,
the smartphones would have been taken as 99 vehicles). Even allowing for the possibility of
managing fake accounts, since Google Maps collects data during the trip (i.e., the type of
device employed and the mean speed), the service would not have mistakenly taken the 99
smartphones as 99 vehicles. Moreover, the hypothesis that the service considered the 99
devices as people does not seem credible either.</p>
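        <p>The device-type and mean-speed argument can be made concrete with a small calculation; the walk duration and the classification thresholds below are illustrative assumptions, not figures from the experiment.</p>

```python
def mean_speed_kmh(distance_km, duration_h):
    return distance_km / duration_h

def classify(speed_kmh):
    # Illustrative speed bands a maps service might use.
    if speed_kmh < 7:       # typical walking/towing pace
        return "pedestrian"
    if speed_kmh < 25:      # cycling range
        return "bicycle"
    return "vehicle"

# Roughly 4 km covered in about an hour of walking with the wagon.
speed = mean_speed_kmh(4.0, 1.0)
print(round(speed, 1), classify(speed))  # 4.0 pedestrian
```

At any plausible walking pace, the mean speed alone places the devices far below vehicular speeds, which is consistent with the paper's doubt that the service would have classified them as 99 cars.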
      </sec>
      <sec id="sec-4-4">
        <title>4.5. Motivation</title>
        <p>Simon Weckert, through his site, declares his fascination for the digital world and its reflection
on social aspects. He aims to evaluate the worth of technology from the perspective of future
generations. The artist’s philosophy aligns with the nature of the experiment.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>The paper presents a cognitive approach for fact-checking web content based on a Knowledge
Graph leveraging OSINT tools and principles and a domain ontology. In particular, the method
considers state-of-the-art tips and the associated instruments for acquiring and fact-checking
information. Acquired data is conveniently annotated and analyzed at each workflow stage to
determine inconsistencies between different aspects (i.e., website, account profiles, articles and
pieces of content), allowing smart fact-checking activities. The proposed approach was applied
to a real case study concerning an experiment performed by artist Simon Weckert, consisting
of tricking the Google Maps service with simulated traffic congestion. The case study aims to
demonstrate the practical potential of the proposed methodology to support practitioners and
fact-checkers in ascertaining the truthiness of web content through multi-aspect fact analysis.</p>
      <p>In the future, it would be interesting to automate the whole workflow by supporting the
technical tasks (e.g., KG implementation and querying), as well as content identification, extraction,
and annotation.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This work was partially supported by project SERICS (PE00000014) under the MUR National
Recovery and Resilience Plan funded by the European Union - NextGenerationEU.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Giansiracusa</surname>
          </string-name>
          ,
          <article-title>How Algorithms Create and Prevent Fake News: Exploring the Impacts of Social Media, Deepfakes, GPT-3, and More</article-title>
          , Springer,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Flaherty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Sturm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Farries</surname>
          </string-name>
          ,
          <article-title>The conspiracy of covid-19 and 5g: Spatial analysis fallacies in the age of data democratization</article-title>
          ,
          <source>Social Science &amp; Medicine</source>
          <volume>293</volume>
          (
          <year>2022</year>
          )
          <fpage>114546</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Kvetanová</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Predmerská</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Švecová</surname>
          </string-name>
          ,
          <article-title>Debunking as a method of uncovering disinformation and fake news</article-title>
          ,
          <source>Fake News Is Bad News - Hoaxes, Half-truths and the Nature of Today's Journalism</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Fenza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gallo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Loia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Marino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Orciuoli</surname>
          </string-name>
          ,
          <article-title>A cognitive approach based on the actionable knowledge graph for supporting maintenance operations</article-title>
          ,
          <source>in: 2020 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <article-title>Rumor spreading model with considering debunking behavior in emergencies</article-title>
          ,
          <source>Applied Mathematics and Computation</source>
          <volume>363</volume>
          (
          <year>2019</year>
          )
          <fpage>124599</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Bangerter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Fenza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gallo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Genovese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Nota</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Stanzione</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zanfardino</surname>
          </string-name>
          ,
          <article-title>Unmask inflated product reviews through machine learning</article-title>
          ,
          <source>in: 2021 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Chua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. H.-L.</given-names>
            <surname>Goh</surname>
          </string-name>
          ,
          <article-title>Debunking rumors on social media: The use of denials</article-title>
          ,
          <source>Computers in Human Behavior</source>
          <volume>96</volume>
          (
          <year>2019</year>
          )
          <fpage>110</fpage>
          -
          <lpage>122</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <article-title>Reciprocal spreading and debunking processes of online misinformation: A new rumor spreading-debunking model with a case study</article-title>
          ,
          <source>Physica A: Statistical Mechanics and its Applications</source>
          <volume>565</volume>
          (
          <year>2021</year>
          )
          <fpage>125572</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Martín</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huertas-Tato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Huertas-García</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Villar-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Camacho</surname>
          </string-name>
          ,
          <article-title>FacTeR-Check: Semi-automated fact-checking through semantic similarity and natural language inference</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>251</volume>
          (
          <year>2022</year>
          )
          <fpage>109265</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. D.</given-names>
            <surname>Zeng</surname>
          </string-name>
          ,
          <article-title>Knowledge structure driven prototype learning and verification for fact checking</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>238</volume>
          (
          <year>2022</year>
          )
          <fpage>107910</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>De Maio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Fenza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gallo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Loia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Volpe</surname>
          </string-name>
          ,
          <article-title>Cross-relating heterogeneous text streams for credibility assessment</article-title>
          ,
          <source>in: 2020 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS)</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Babaei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kulshrestha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. M.</given-names>
            <surname>Redmiles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Gummadi</surname>
          </string-name>
          ,
          <article-title>Analyzing biases in perception of truth in news stories and their implications for fact checking</article-title>
          ,
          <source>IEEE Transactions on Computational Social Systems</source>
          <volume>9</volume>
          (
          <year>2021</year>
          )
          <fpage>839</fpage>
          -
          <lpage>850</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Deng</surname>
          </string-name>
          ,
          <article-title>Knowledge enhanced fact checking and verification</article-title>
          ,
          <source>IEEE/ACM Transactions on Audio, Speech, and Language Processing</source>
          <volume>29</volume>
          (
          <year>2021</year>
          )
          <fpage>3132</fpage>
          -
          <lpage>3143</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>N.</given-names>
            <surname>Seddari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Derhab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Belaoued</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Halboob</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Al-Muhtadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bouras</surname>
          </string-name>
          ,
          <article-title>A hybrid linguistic and knowledge-based analysis approach for fake news detection on social media</article-title>
          ,
          <source>IEEE Access</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>62097</fpage>
          -
          <lpage>62109</lpage>
          .
          <pub-id pub-id-type="doi">10.1109/ACCESS.2022.3181184</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Urbani</surname>
          </string-name>
          ,
          <article-title>Verifying online information</article-title>
          ,
          <year>2020</year>
          . URL: https://firstdraftnews.org/long-form-article/verifying-online-information/.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>