<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>MisinfoMe: Who is Interacting with Misinformation?</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Knowledge Media Institute, The Open University</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Misinformation is a persistent problem that threatens societies at multiple levels. In spite of the intensified attention given to this problem by scientists, governments, and media, the lack of awareness of how someone has interacted with, or is being exposed to, misinformation remains a challenge. This paper describes MisinfoMe, an application that collects ClaimReview annotations and source-level validations from numerous sources, and provides an assessment of a given Twitter account, and the followed accounts, with regards to how much they interact with reliable or unreliable information.</p>
      </abstract>
      <kwd-group>
        <kwd>Misinformation</kwd>
        <kwd>Twitter</kwd>
        <kwd>Fact Checking</kwd>
        <kwd>ClaimReview</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Misinformation is an old problem, amplified in recent years by social media.
Much research has tackled this problem from social as well as technical perspectives,
to better understand the phenomenon, how it spreads online, how it can be
automatically detected, its impact on opinions, etc. Nevertheless, research shows
that making people aware of how they interact with misinformation remains
a challenge [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Despite this, there is hardly any research or technology that
aims at raising awareness of how a given user and their network have interacted
with false or unreliable information [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>This paper introduces MisinfoMe (http://socsem.kmi.open.ac.uk/misinfo/), an application that uses ClaimReview
semantic annotations and other information validators to highlight the tweets
that point to reliable or unreliable sources and content. MisinfoMe also expands
the assessment to the Twitter users followed by the given one, to reflect the
amount of misinformation they pass on to the given user. More specifically, the
objectives of the MisinfoMe tool are:
1. Create a (mis)information collection of 68K URLs and 1.3K domains,
generated by 86 fact-checking organisations,
2. Collect the Twitter timeline of a given user and the accounts they follow,
3. Highlight tweets with URLs pointing to false/true/mixed information,
4. Compare the given user with the "average user" with respect to the level of
interaction with reliable and unreliable information,
5. Highlight the levels of misinformation interaction by followed Twitter
accounts and rank them accordingly.
(Copyright 2019 for this paper by its authors. Use permitted under Creative
Commons License Attribution 4.0 International (CC BY 4.0).)</p>
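      <p>Objectives 3 to 5 rest on matching tweet URLs against the collected dataset. The following is a minimal sketch of that lookup, assuming hypothetical URL-level and domain-level verdict tables and category labels (not MisinfoMe's actual data structures):</p>

```python
# Illustrative sketch: classify a tweet's URL against a (mis)information
# dataset holding URL-level fact-checks and domain-level assessments.
from urllib.parse import urlparse

URL_VERDICTS = {"https://example.org/fake-story": "misinforming"}   # hypothetical
DOMAIN_VERDICTS = {"example.org": "mixed", "bbc.co.uk": "valid"}    # hypothetical

def classify(url):
    # A URL-level fact-check takes priority over a domain-level assessment.
    if url in URL_VERDICTS:
        return URL_VERDICTS[url]
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    return DOMAIN_VERDICTS.get(domain, "unknown")
```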
      <p>In the following sections we briefly describe related work, the dataset used
by MisinfoMe, and the main components of the application.</p>
      <p>Demo: MisinfoMe will be demoed at the conference. Users can enter a Twitter
handle to see the assessment of their levels of misinformation interaction, and
the tweets that belong to each category (misinforming, valid, mixed). A video
recording of the demo is available at https://tinyurl.com/yxeun2nt and the
full application is accessible at http://socsem.kmi.open.ac.uk/misinfo/</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>Several applications have been developed to help validate different types of
content (images, bot accounts, news sources, reviews, etc.). NewsGuard
(https://newsguardtech.com) is a browser plugin that uses journalists to assess the
reliability of websites (e.g., BBC, Wikipedia, Breitbart) following generic
credibility and transparency indicators. B.S.Detector (tinyurl.com/y368cbhw) is
another browser plugin that offers validations of news sources. ClaimBuster
(idir.uta.edu/claimbuster/) detects and ranks claims, and matches them
to fact-checks. Botometer (botometer.iuni.iu.edu) calculates the likelihood
of a Twitter account being a bot based on various activity patterns. TinEye
(tineye.com) provides a reverse image search to help track out-of-context
image usage. Rbutr (rbutr.com) connects web pages that present disputes or
contradictions, provided by the user community, and presents them to users when
browsing such pages. Fakespot (fakespot.com) assesses the validity of online
reviews on sites such as Amazon and Tripadvisor.</p>
      <p>Unlike the tools above, MisinfoMe is focused on revealing how a given Twitter
account, and the accounts it follows, have been interacting with misinformation.
MisinfoMe does not offer new content validations per se; rather, it uses
third-party fact-checking and source validations to assess all the URL-bearing tweets
in the timelines of the accounts in question.</p>
    </sec>
    <sec id="sec-3">
      <title>MisinfoMe</title>
      <p>Below we briefly describe the main components of MisinfoMe and its user
interface. An in-depth description can be found on the MisinfoMe site itself.</p>
      <sec id="sec-3-1">
        <title>Validation Dataset</title>
        <p>
          MisinfoMe collects ClaimReview annotations (https://schema.org/ClaimReview;
see example below) from different fact-check publishers and some aggregators,
such as https://www.datacommons.org/factcheck/download. ClaimReview is a
standard schema used for fact-checking reviews of claims, and from it we collect
the URL (if present) of the fact-checked article and the validation outcome
(rating). We then add other publicly available source assessments, e.g., from
OpenSources (http://www.opensources.co/), Wikipedia
(https://en.wikipedia.org/wiki/List_of_fake_news_websites), and BuzzFeedNews
(https://github.com/BuzzFeedNews/2018-12-fake-news-top-50), combined with the
approach described in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. A complete list of the sources used is on the MisinfoMe site
at http://socsem.kmi.open.ac.uk/misinfo/about. In total, MisinfoMe has
collected 34,764 ClaimReviews so far, constructing a dataset of 68K URLs and
1.3K domains, generated by 86 fact-checking organisations.
        </p>
        <p>
{
  "@context": "http://schema.org",
  "@type": "ClaimReview",
  "author": { "@type": "Organization", "url": "https://www.politifact.com/" },
  "claimReviewed": "President Donald Trump hasnt condemned David Duke and Richard Spencer.",
  "datePublished": "2019-08-26",
  "itemReviewed": { "@type": "Claim", "firstAppearance": {
    "url": "https://www.facebook.com/TND/videos/358402408404240/" } },
  "reviewRating": { "alternateName": "Mostly False", "ratingValue": "3" },
  "url": "https://www.politifact.com/truth-o-meter/statements/2019/aug/27/joe-biden/biden-wrong-when-he-says-trump-hasnt-condemned-dav/"
}
        </p>
      </sec>
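      <p>Extracting the fields used for matching from such an annotation is straightforward; the following sketch pulls the reviewed-item URL and the rating out of a ClaimReview annotation shaped like the PolitiFact example (field names follow the schema.org ClaimReview vocabulary; the parsing code itself is illustrative, not MisinfoMe's implementation):</p>

```python
import json

# Pull the reviewed-item URL and the rating out of a ClaimReview annotation.
claim_review = json.loads("""{
  "@type": "ClaimReview",
  "itemReviewed": {"@type": "Claim", "firstAppearance":
      {"url": "https://www.facebook.com/TND/videos/358402408404240/"}},
  "reviewRating": {"alternateName": "Mostly False", "ratingValue": "3"}
}""")

# .get() chains tolerate annotations where optional fields are missing.
reviewed_url = claim_review.get("itemReviewed", {}).get("firstAppearance", {}).get("url")
rating = claim_review.get("reviewRating", {}).get("alternateName")
```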
      <sec id="sec-3-2">
        <title>Assessing Twitter Timelines</title>
        <p>Given a Twitter handle, MisinfoMe uses the Twitter API to collect its timeline.
URLs appearing in the tweets are then matched with the dataset described
above. MisinfoMe displays how many tweets were collected, how many contain
a URL (Figure 1), and how many of these URLs point to reliable, unreliable, or
mixed pages and sources. To unshorten the URLs, we perform an HTTP HEAD
request and follow the HTTP redirections. We also extract the domain name of
each URL to compare with domain-level validations.</p>
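        <p>The unshortening and domain-extraction steps can be sketched with the Python standard library as follows; this is an illustrative sketch, not the actual MisinfoMe code, and unshorten requires network access:</p>

```python
from urllib.parse import urlparse
import urllib.request

def unshorten(url):
    # Resolve shortened URLs: issue an HTTP HEAD request and let
    # urllib follow the HTTP redirections to the final location.
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.geturl()

def extract_domain(url):
    # Domain name used to match against domain-level validations.
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host
```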
        <p>As shown in Figure 2, users can scroll the list of tweets that were assessed, see
evidence supporting their placement into the reliable, unreliable, or mixed category,
and the source of this evidence (e.g., Snopes.com). A bar-chart comparison of the
given Twitter account with the average of over 770 randomly selected accounts
is also shown on MisinfoMe (not shown in the figures).</p>
      </sec>
      <sec id="sec-3-3">
        <title>Assessing Social Network</title>
        <p>In addition to assessing the given Twitter account with regards to sharing
misinformation, MisinfoMe also assesses the accounts followed by the given one with
regards to their spread of misinformation, and ranks them accordingly. Figure 3
shows a selection of the accounts followed by the given account, colour-coded
to reflect their tendency to share reliable (green), unreliable (red), or mixed
(orange) information. Grey indicates that none of their shared URLs were found
in our dataset.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusions and Future Work</title>
      <p>This paper introduced MisinfoMe, an application for assessing the sharing of
unreliable information by a Twitter account and the accounts it follows.
We described the data collection and the analysis performed, and the main user
interface components. We are currently planning several user-based experiments
to evaluate the interface and the datasets, and to assess the impact of MisinfoMe
on people's awareness of their interactions with misinformation, or of the
interactions of those they are interested in (e.g., celebrities, politicians).</p>
      <p>Acknowledgment: Co-Inform, H2020, grant agreement 770302.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Fernandez</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alani</surname>
          </string-name>
          , H.:
          <article-title>Online Misinformation: Challenges and Future Directions</article-title>
          .
          <source>In Companion Proc. The Web Conference</source>
          , Lyon, France (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Southwell</surname>
            ,
            <given-names>B. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thorson</surname>
            ,
            <given-names>E. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sheble</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Misinformation and mass audiences</article-title>
          . University of Texas Press (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Mensio</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alani</surname>
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>News Source Credibility in the Eyes of Different Assessors</article-title>
          .
          <source>In Conference for Truth and Trust Online</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>