<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Combining Machine Learning with Knowledge Engineering to detect Fake News in Social Networks: A Survey</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sajjad Ahmed</string-name>
          <email>ahmed.sajjad@unicam.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Knut Hinkelmann</string-name>
          <email>knut.hinkelmann@fhnw.ch</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Flavio Corradini</string-name>
          <email>flavio.corradini@unicam.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Camerino</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>FHNW University of Applied Sciences and Arts Northwestern Switzerland Riggenbachstrasse 16</institution>
          ,
          <addr-line>4600 Olten</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <abstract>
        <p>Due to the extensive spread of fake news on social and news media, fake news detection has become an emerging research topic that is gaining attention. On news media and social media, information spreads at high speed but without checks on accuracy, so a detection mechanism must be able to classify news fast enough to curb the dissemination of fake news, which has the potential for negative impacts on individuals and society. Detecting fake news on social media is therefore important, and also a technically challenging problem. Machine learning is helpful for building artificial intelligence systems based on tacit knowledge, because it can help us solve complex problems using real-world data. Knowledge engineering, on the other hand, is helpful for representing the explicit knowledge of experts. We therefore propose that an integration of machine learning and knowledge engineering can be helpful in the detection of fake news. In this paper we present what fake news is, why it matters, its overall impact on different areas, different ways to detect fake news on social media, and existing detection algorithms that can help overcome the issue, along with similar application areas; at the end we propose a combination of data-driven and engineered knowledge to combat fake news. We study and compare three different modules, text classifiers, stance detection applications and existing fact checking techniques, that can help to detect fake news. Furthermore, we investigate the impact of fake news on society. Experimental evaluation on publicly available datasets indicates that our proposed combination can serve fake news detection better.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Fake news and the spread of misinformation have dominated
the news cycle since the 2016 US Presidential elections. Some
reports show that Russia created millions of fake accounts
and social bots to spread false stories during the elections
        <xref ref-type="bibr" rid="ref1">(Lewandowsky 2017)</xref>
        . Various motivations are observed for
spreading fake news and generating this type of information
on social media channels, among them political gain, ruining
someone else's reputation, or seeking attention. Fake news is
a type of yellow journalism or propaganda that consists of
deliberate misinformation or hoaxes spread via traditional
print and broadcast news media or online social media.
      </p>
      <p>
        The importance of fake news can easily be understood from
the report published by the Pew Research Center (Rainie et al.
2016). The statistics show that 38% of adults often get news
online, 28% rely on websites/apps and 18% rely on social media.
Overall, 64% of adults feel that fake news causes a great deal of
confusion. The importance of fake news can also be judged
from the diagram below, which shows how dramatically fake
news gained worldwide popularity in 2016 after the US
presidential elections, changing the way people interpret and
respond to real news. For example, some fake news was created
just to trigger people's distrust and confuse them, impeding
their ability to differentiate what is true from what is not
        <xref ref-type="bibr" rid="ref3 ref5">(Scott
et al., 2000; Leonhard et al., 2017; Himma 2017)</xref>
        . It is important
to understand that fake and deceptive news has existed for a
long time; it has been part of the conversation as far back as
the birth of the free press (Soll 2016). There are various
approaches to automated fake news detection: text classification,
stance detection, metadata and fact checking.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Data Driven:</title>
      <p> Text classification: They mainly focus on extracting various
features of text and after that incorporating of those
features into classification models e.g. Decision tree, SVM,
logistic regression, K nearest neighbor. At the end
selection of best algorithm that performs well (Nidhi et al. 2011).</p>
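      <p>As a minimal sketch of the pipeline just described, and not code from any of the surveyed systems, a bag-of-words classifier in scikit-learn might look as follows; the headlines and labels are invented placeholders, and logistic regression could be swapped for an SVM, decision tree or k-nearest-neighbors classifier:</p>
      <preformat>
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy placeholder headlines with fake (1) / real (0) labels.
texts = [
    "Shocking miracle cure doctors don't want you to know",
    "You won't believe what this celebrity secretly did",
    "Parliament passes annual budget after lengthy debate",
    "Central bank holds interest rates steady this quarter",
]
labels = [1, 1, 0, 0]

# Feature extraction (TF-IDF) followed by a classifier, as in the
# text-classification approaches described above.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Miracle cure celebrity secret"]))
```
      </preformat>
      <p>A real system would of course train on a large labeled corpus and compare several of the algorithms listed above before selecting one.</p>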
      <p>
        Emergent (www.emergent.info) is a real-time, data-driven rumor
identification approach. It automatically tracks rumors
associated with social media, but the steps that require human
input have not been automated. The problem is that
most classification approaches are supervised, so we need a
prior dataset to train our model; as discussed, obtaining a
reliable fake news dataset is a very time-consuming process.
 Stance detection: Detecting false news has become an important
task since the 2016 US Presidential elections. Governments,
newspapers and social media organizations are working
hard to separate fake content from credible content. The first
step in the identification phase is to understand what
others are saying about the same topic
        <xref ref-type="bibr" rid="ref7">(Ferreira et al. 2016)</xref>
        .
      </p>
      <p>
        The Fake News Challenge likewise chose to focus initially on stance
detection. In stance detection we estimate the relatedness of
two different pieces of text on the same topic and the
        <xref ref-type="bibr" rid="ref35 ref8">stance of others (Saif et al. 2017)</xref>
        . PHEME (www.pheme.eu) was a three-year
research project funded by the European
Commission from 2014 to 2017, studying natural language
processing techniques for rumor detection, stance
classification
        <xref ref-type="bibr" rid="ref10 ref9">(Lukasik et al. 2015; Zubiaga et al. 2016)</xref>
        ,
contradiction detection and the analysis of social media
rumors. Existing stance detection approaches are based on
embedding features of individual posts to predict the stance of
that particular content.
 Meta-data: We can analyze fake news using different
similarity measures, e.g. location, time, author and quality.
We can detect whether or not the same news has been published
by other media agencies. We can check the location of the news:
a news item has a higher probability of being fake if it is
generated somewhere other than the location it deals with
(e.g. Trump writes about China or the Arabian states, or news
about Hillary Clinton originates in Russia). We can check the
news quality: it is more probable that fake news items do not
mention their sources and simply claim something, while for
real news the source is mentioned. We can also check the time
of the news, i.e. whether the same news appears in other media
or sources: fake items are often repeated more in the beginning,
because they are interesting, and become recognized as fake
over time, which reduces the repetition, or they are deleted
from some websites. At this stage we do not have a definitive
solution, but after a detailed literature review we can say
that producing more reports with more facts can be useful in
helping us make such decisions and find technical solutions
for fake news detection.
      </p>
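      <p>The relatedness estimate at the heart of stance detection can be sketched with plain bag-of-words cosine similarity; this is a toy stand-in with invented sentences, assuming nothing about the embedding features the surveyed systems actually use:</p>
      <preformat>
```python
import math
import re
from collections import Counter

def bow(text: str) -> Counter:
    """Lower-cased bag-of-words vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

headline = "Millions of fake accounts spread false stories"
agree = "Researchers found millions of fake accounts that spread false election stories"
unrelated = "The city council approved a new bicycle lane"

# A related body scores higher than an unrelated one.
print(cosine(bow(headline), bow(agree)) > cosine(bow(headline), bow(unrelated)))
```
      </preformat>
      <p>A real stance classifier would map such relatedness features, among others, onto the agree/disagree/discuss/unrelated labels used by the Fake News Challenge.</p>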
    </sec>
    <sec id="sec-3">
      <title>Knowledge Engineering:</title>
      <p>
         Fact checking techniques mainly focus on checking the
facts of a news item against known facts. Three types of
fact checking technique are available:
Knowledge Linker (Ciampaglia et al. 2015), PRA (Lao et
al. 2011), and PredPath
        <xref ref-type="bibr" rid="ref13">(Shi et al. 2016)</xref>
        . The
prediction algorithms that use knowledge to check facts
are the degree product
        <xref ref-type="bibr" rid="ref13">(Shi et al. 2016)</xref>
        , the Katz index
        <xref ref-type="bibr" rid="ref14">(Katz 1953)</xref>
        ,
Adamic &amp; Adar
        <xref ref-type="bibr" rid="ref15">(Adamic et al. 2003)</xref>
        and the Jaccard
coefficient (Liben et al. 2016). Some fact checking
organizations provide online fact checking services, e.g.
Snopes (www.snopes.com), PolitiFact (www.politifact.com) and
Fiskkit (www.fiskkit.com). Hoaxy (https://hoaxy.iuni.iu.edu/)
is another platform for fact checking; the collection, detection
and analysis of online misinformation are part of Hoaxy. The
criterion followed to check whether news is fake or not is
simply to refer it to domain experts, i.e. individuals or
organizations expert on that particular topic. They also rely on
non-partisan information and data sources (e.g.,
peer-reviewed journals, government agencies or
statistics).
      </p>
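      <p>To make the similarity-based prediction scores named above concrete, here is a toy sketch, over an invented mini-graph rather than data from the cited works, of the Jaccard coefficient and the Adamic/Adar index computed on shared neighbors:</p>
      <preformat>
```python
import math

# Toy undirected knowledge graph: hypothetical entities and their attributes.
graph = {
    "Obama":    {"USA", "Politician", "Hawaii"},
    "Clinton":  {"USA", "Politician", "New York"},
    "Einstein": {"Physicist", "Germany"},
}
# Build neighbor sets for every node (both entities and attributes).
neighbors = {}
for node, nbrs in graph.items():
    neighbors.setdefault(node, set()).update(nbrs)
    for n in nbrs:
        neighbors.setdefault(n, set()).add(node)

def jaccard(u, v):
    """|N(u) & N(v)| / |N(u) | N(v)| -- shared-neighbor overlap."""
    gu, gv = neighbors[u], neighbors[v]
    return len(gu & gv) / len(gu | gv)

def adamic_adar(u, v):
    """Shared neighbors with few links count more than well-connected hubs."""
    return sum(1.0 / math.log(len(neighbors[w]))
               for w in neighbors[u] & neighbors[v]
               if len(neighbors[w]) > 1)

print(jaccard("Obama", "Clinton"), jaccard("Obama", "Einstein"))
```
      </preformat>
      <p>In knowledge-graph fact checking, such scores serve as evidence that a claimed link between two entities is plausible.</p>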
    </sec>
    <sec id="sec-4">
      <title>Discussion</title>
      <p>
        Our main research question is how one distinguishes
between fake and genuine news articles using data-driven and
knowledge-engineering approaches. The facts show that the fake news
phenomenon is an important issue that requires scholarly
attention to determine how fake news diffuses. Different groups
have introduced different models; some applied only the
data-oriented side and some only the knowledge side. The important
point is that the speed at which this type of information spreads on
social media networks is a challenging problem that requires
attention and an alternative solution. If news is detected as fake,
the existing techniques block it immediately by design, and we
cannot replace them; but if a news item is detected as fake, we
need at least some expert opinion or verification before blocking
that particular news. This has encouraged third-party fact checking
organizations to step in and resolve the issue, but that is also a
time-consuming process. We need an application that checks in one
place whether the news is fake or not.
The existing fake news systems are based on predictive models
that simply classify news as fake or not fake. Some models use
source reliability and network structure; the big challenge in
those cases is training the model, which is impossible given the
non-availability of corpora.
Owing to the surge of the fake news problem and the challenges
discussed above, a volunteer-based organization,
FakeNewsChallenge (www.fakenewschallenge.org), comprising 70
teams, organizes machine learning competitions specifically for
the fake news detection problem.
In the end we can say that there is a need for an alternative
application that combines knowledge with data; automation of
fact checking is required that looks deeply into the content of
the news together with expert opinion in the same place to detect
fake news.
The rest of this paper is divided into four sections. Section 2
contains the background, the impact on society, news content models,
related work and similar application areas; Section 3 describes the
methodology, the proposed combination approach and the publicly
available data set we used for initial classification. Our
conclusions and future directions are presented in Section 4
      </p>
    </sec>
    <sec id="sec-5">
      <title>Literature Review</title>
      <p>In this section we cover the topics that are related to
our subject and helpful for a better understanding of fake
news detection. We begin by discussing the level of trust that
readers place in online news media. We then discuss the
impact of fake news on society and the different types of news
models. Finally, we discuss related work and similar application
areas in which some researchers applied data-driven approaches
and others applied knowledge-based approaches to overcome a
specific problem in that particular domain.</p>
    </sec>
    <sec id="sec-6">
      <title>Background</title>
      <p>
        The Internet gave everyone the opportunity to enter the online news
business, since many people had already rejected the
traditional news sources that had earned a high level of public trust
and credibility through their work. According to a survey, general
trust in the mass media has collapsed to its lowest point in the
history of the business, especially along political lines: 51% of
Democrats but only 14% of Republicans in the USA express a great
deal of trust in the mass media as a news source
        <xref ref-type="bibr" rid="ref17">(Lazer et al. 2018)</xref>
        .
      </p>
      <p>It is known that information that is repeated is more
likely to be rated true than information that has not
been heard before: familiarity with false news increases its
perceived truthfulness. It does not stop there, as false stories
can also create false memories. The authors who first observed the
"illusory-truth effect" reported that subjects rated repeated
statements as truer than new statements. They present a case study
in which participants who had read false news stories for five
consecutive weeks believed those stories to be more truthful and
plausible than participants who had not been exposed
        <xref ref-type="bibr" rid="ref18">(Hasher et al. 1977)</xref>
        . News can appear true if the information it expresses is
familiar; familiarity is an automatic consequence of exposure, so
it influences perceived truth entirely unintentionally. Even in
cases where the source or the agency that circulated the stories
warned that the source may not be credible, people did not stop
believing the story, due to familiarity
        <xref ref-type="bibr" rid="ref19">(Begg et al. 1992)</xref>
        . In another study, half of the statements shown in the
experiments were true and half were false; the results show that
participants rated repeated statements as more true than the
statements they heard for the first time, even when they were
false, due to familiarity
        <xref ref-type="bibr" rid="ref20">(Bacon et al. 1979)</xref>
        . Source monitoring is the ability to check and identify
the origin of the news we read. Some studies clearly indicate that
participants use familiarity to infer the source of their
memories. Another study proposed that general knowledge and
semantic memory do not depend on the conditions under which a
person learned the information; similarly, a person may have some
knowledge about an event but not remember the event itself, since
the knowledge comes from memory
        <xref ref-type="bibr" rid="ref21">(Potts et al. 1989)</xref>
        .</p>
    </sec>
    <sec id="sec-7">
      <title>Overall Impact on different areas</title>
      <p>
        News is a real-time account, a comprehensive story that
covers different issues such as criminology, health, sports, politics
and business. Local news agencies mostly focus on specific
regional issues, while international news agencies cover both
local and global news. Finding a particular story matching a
reader's preference is an important task. Different methods were
proposed in this study for addressing the issue and
following the reader's choice
        <xref ref-type="bibr" rid="ref22">(Zhai et al. 2005)</xref>
        . Hot topics in a local area
during a particular period can be detected from micro-blogs
that differ in wording but point to the same topic,
using Twitter and Wikipedia.
      </p>
    </sec>
    <sec id="sec-8">
      <title>Business</title>
      <p>In online news media, the services and the total number of users
are important for gaining more business. Some big names that earn
a great deal from high user numbers and from the circulation of
fake news are Facebook, Twitter, Google and other search engines,
which serve both fake news producers and consumers. Fake news is
growing dramatically day by day, and its impact on society is
very harmful.</p>
    </sec>
    <sec id="sec-9">
      <title>Social Networks</title>
      <p>
        Since the US Presidential elections, social media has faced
pressure from the general public and civil society to decrease
fake news on its platforms. Combating fake news is a very
difficult task, especially when no proper checks and balances or
sharing policies are in place. Articles that go viral on social
media can draw significant advertising revenue when users click
and are redirected to the page. But the question is how
we can measure the importance of social media networks for
fake news suppliers; one possibility is to measure the
source of their web traffic. Each time a user visits a
web page, that user has either navigated there directly or been
referred from some other site
        <xref ref-type="bibr" rid="ref23">(Allcott et al. 2017)</xref>
        .
      </p>
      <p>
        One area that is really helpful for detecting fake articles on
Facebook is the fact checking organizations. According to
Facebook, they are taking every step to overcome the issue
on their platform and to make it as difficult as possible
for people who want to share fake content to buy ads on the
platform. Better identification of false news, with the help of
the community, third-party fact checking organizations and
stance detection mechanisms, is possible because they
can limit the speed at which fake content spreads and make
it uneconomical
        <xref ref-type="bibr" rid="ref24">(Mosseri 2017)</xref>
        .
      </p>
      <p>
        Individual users have the same facility: they will see a message
that some people do not agree with the article's content. Regular
users are not in a position to judge the validity of the
links they see, so Facebook's flagging functions might be
unreliable
        <xref ref-type="bibr" rid="ref25">(Wohlsen 2015)</xref>
      </p>
      <p>The second area of focus is the flag that accompanies a
fake news article: users can simply click in the upper right corner
of a post. The more often a particular post is flagged as false by
users, the less often it will show up in the News Feed.
According to Facebook policy, they do not delete a flagged post
but attach a disclaimer with the statement "Many people on
Facebook have reported that this story contains false
information" (Stanford History Education Group 2016).</p>
      <p>Due to the sensitivity of the issue, Facebook sends a flagged
post to a third party responsible for checking the facts
about that post. If the fact checking organizations mark it as
disputed, users automatically see a banner under the
article when it appears in their news feed. The banner
clearly explains that a third-party organization has
disputed it, and a link is provided. In addition, disputed
stories are pushed down in the news feed, and before sharing,
users see a message asking whether they are sure they want to
share it (Guynn 2017). Relying on users is not a
permanent or good solution, but the idea is to educate the
users: if they consent, they can still share. If every
user took care over this, fake news would not be as big a
problem.</p>
      <p>
        Checking the level of truth in articles is very difficult, as they
differ only on a few points and in a very professional manner. That
is why the best way forward is for the management of
Facebook to educate their users about the sharing
policy: every user needs to understand that before sharing any
information on Facebook they must be sure about it
        <xref ref-type="bibr" rid="ref27">(Dillet
2017)</xref>
        . Facebook's management claims
to have an algorithm that helps by rooting out fake
articles; before sharing, the algorithm shows users the article's
source, date, topic and number of interactions.
Compared with Twitter, fake news there is shared by real
account holders linking to some small websites and by highly active
'cyborg' users (Silva et al. 2016). They are very professional, and
sometimes these professional groups have evolved to be
industrialized by states and terrorist organizations. Such groups
are called troll farms, and according to one study there is a
potential algorithm to track them down on Twitter (Nygren et al. 2016).
      </p>
    </sec>
    <sec id="sec-10">
      <title>Security Agencies</title>
      <p>Misinformation and propaganda have always been used to influence
people and create fear of an opponent. We can categorize propaganda
into three types. White propaganda is where we know the
initiator, and the news circulated by that particular person or
group is true. Black propaganda is where we do not know
the source, and the news shared by that person or group is
totally false.</p>
      <p>The grey type lies between white and black. During
the Cold War the objective of such activities was
to sway opinions by hiding and distorting facts from
hidden senders. A big example of this type of propaganda
occurred from 2002 to 2008, when the United States military
recruited approximately seventy-five retired officers
just to spread claims in the media about Iraq's possible ownership
of weapons. The objective of this kind of activity is to weaken the
public support of the opponent and strengthen one's own
support. The work was done through different channels, e.g.
radio, newspapers and TV channels, that hid the connections
(Nygren et al. 2016).</p>
      <p>Compared with earlier variants of propaganda,
today it is possible for everyone
to reach a large audience within seconds, which was not
possible in the past; since we are more reliant on information,
it affects us more. Other actors are also involved in such campaigns
and can easily distort the facts: diplomats,
military and economic state actors, and public relations
departments. An independent body can control these types
of activities more easily than a state that controls everything. A big
example of this disinformation is the Ukraine crisis of 2014, where
one state invaded another country's territory and misled the world
about it, badly affecting the world's response. We know that
spreading lies is not the only element; other linked
activities are involved as well. In the next section we
discuss the different types of news content models one by one, with
examples.</p>
    </sec>
    <sec id="sec-11">
      <title>News Content Models</title>
      <p>
        In content modeling we identify our requirements, develop a
taxonomy (classification system) that meets those
requirements, and consider where metadata should be allowed or
required.
News content models can be categorized as knowledge based
and style based, but advances in social media provide
additional resources with which researchers can supplement and
enhance news content models, such as social context models,
which are stance based and propagation based. The main focus of news
content modeling is on news content features, and especially
factual sources, for the detection of fake and real text
        <xref ref-type="bibr" rid="ref28">(Wang 2017)</xref>
        .
In the next section we discuss the news content models and the
existing applications in their domains, with examples.
      </p>
    </sec>
    <sec id="sec-12">
      <title>Knowledge-based:</title>
      <p>The objective of the knowledge-based approach is to use external
sources to fact-check the news content; the goal of fact
checking is to assign a truth value to a particular claim (Vlachos
et al. 2014). From the literature it is clear that fact checking
in the fake news detection area has gained
considerable attention. That is the reason many efforts have been made
to develop feasible automated fact checking systems.</p>
      <p>Since fake news attempts to spread false claims on
social media networks and news media alike, the straightforward
approach is to detect those false claims and check their
truthfulness. We can categorize existing fact checking
applications into three classes: expert oriented, crowd-sourcing
oriented and computationally oriented.</p>
      <p> Expert Oriented
We need highly domain experts in expert oriented fact
checking that can investigate data and documents to verdict the
claims. The famous fact checking applications are Snopes8 &amp;
PolitiFact9. Expert oriented fact checking is very demanding
but it‟s also time consuming process. As soon as they receive
new claim they consult domain experts, journals or statistical
analysis already available in that particular domain. It took so
much time so we need to develop a new classification
approach that can help to detect fake news in a better way and
timely.</p>
      <p>Newer fact checking mechanisms help readers evaluate the news
critically, before forming a judgment, by using fact checking.</p>
      <p>
        The objective of this work is not to report whether the
content is fake or not, but to provide a mechanism for
critical evaluation during the news reading process. The reader
starts reading the news, and the fact checking technique gives the
reader the facility to read all related or
linked stories at the same time, for critical evaluation. They use
a scoring measure formula: related stories above the scoring
threshold are displayed, while stories whose score falls below the
threshold are not shown on the corresponding fact check page
        <xref ref-type="bibr" rid="ref49">(Guha 2017)</xref>
        .
      </p>
      <p>
        Three generally agreed-upon characteristics of fake news, the text
of an article, the user response and the source, need to be
incorporated in one place; on this basis a hybrid
model was proposed. The first module captures the abstract temporal
behavior of the users and measures the response and the text. The
second component then estimates a source score for every user, which
is combined with the first module
        <xref ref-type="bibr" rid="ref42">(Ruchansky et al. 2017)</xref>
        . In the end the
proposed model allows CSI to output its prediction separately, as
shown in figure-3.
 Crowd-sourcing Oriented
The crowd-sourcing approach gives users the option to
discuss and annotate the accuracy of specific news. In other
words, it relies fully on the wisdom of the crowd to
enable fact checking on the basis of the crowd's knowledge. Fiskkit
(www.fiskkit.com) is a big example of this type of fact checking, as
it provides users with the facility to discuss and annotate the
accuracy of news articles. Another anti-fake-news detection
application provides the facility to detect fake articles and
further allows users to report suspicious news content so that
editors can check it further. It takes its motivation from the
Facebook flag method, involving the public and
leveraging crowd signals for detecting fake content
        <xref ref-type="bibr" rid="ref54">(Potthast et al.
2016)</xref>
        . An algorithm named Detective was developed that
checks run-time flagging accuracy with a Bayesian inference
method. This algorithm selects a small subset of news every day
and sends it back to the experts, and on the basis of the expert
responses it stops that fake news.
      </p>
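      <p>A rough sketch of the Bayesian-inference idea behind crowd flagging follows; the numbers, the single shared reliability parameter and the independence assumption are all illustrative simplifications, not Detective's actual model:</p>
      <preformat>
```python
def posterior_fake(prior: float, flags: list, accuracy: float) -> float:
    """Update P(fake) from independent user flags via Bayes' rule.

    `accuracy` is the assumed probability that a user flags correctly,
    i.e. P(flag | fake) = accuracy and P(flag | real) = 1 - accuracy.
    """
    p = prior
    for flagged in flags:
        like_fake = accuracy if flagged else 1 - accuracy
        like_real = (1 - accuracy) if flagged else accuracy
        p = p * like_fake / (p * like_fake + (1 - p) * like_real)
    return p

# Three users flag the article, one does not; users assumed 70% reliable.
p = posterior_fake(prior=0.1, flags=[True, True, True, False], accuracy=0.7)
print(round(p, 3))
```
      </preformat>
      <p>A system like Detective would additionally learn per-user reliabilities over time and escalate only the most uncertain items to experts.</p>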
      <p> Computational Oriented
Computational fact checking aims to provide users an
automatic system that can classify true and false contents. Mostly
computational fact checking works on two points that identify
check worthy claims and then discriminate the veracity of fact
claims.</p>
      <p>
        It works on the basis of the key viewpoints of users on the
specific content
        <xref ref-type="bibr" rid="ref31">(Houvardas et al. 2006)</xref>
        . The open web and structured
knowledge graphs are the big examples of these types of
computational fact checking: open web sources are
used as references that can differentiate true and false news
        <xref ref-type="bibr" rid="ref16">(Banko et al. 2007; Magdy et al. 2010)</xref>
      </p>
      <p>
        The separation of fake content into three categories, serious
fabrication, large-scale hoaxes and humorous fakes, was the main
objective of this work. They provide a way to filter, vet and
verify the news, and discuss in detail the pros and cons of
those news types (Rubin, V. et al., 2015)
      </p>
      <p>
        This study is a data-oriented application: they used an
available dataset, applied a deep learning method, and finally
proposed a new text classifier that can predict whether the
news is fake or not
        <xref ref-type="bibr" rid="ref46">(Bajaj 2017)</xref>
        . The dataset used for this project
was drawn from two publicly accessible websites
(www.kaggle.com and https://research.signalmedia.co/newsir16/signal-dataset.html).
Traditionally, rumor detection techniques have been based on
message-level detection and analyzed credibility on the basis of
the data; real-time detection, by contrast, is based on keywords,
and the system gathers the related micro-blogs with the help of a
data acquisition system, which solves this problem.
      </p>
      <p>The proposed model combines user-based, propagation-based
and content-based models, checks real-time credibility and
sends back a response within thirty-five seconds (Zhou et al.
2015).</p>
    </sec>
    <sec id="sec-13">
      <title>Style-based:</title>
      <p>In the style-based approach, fake news publishers use a
specific writing style, necessary to appeal to a wide audience,
that is not found in true news articles. The purpose of this
activity is to mislead, distort or influence a large population.</p>
      <p>
        The categorization of news sources along two dimensions, writing
quality and strong sentiment, is the main point: real news
sources have higher writing quality (taking into account
misspelled words, punctuation and sentence length) compared with
fake news articles, which are likely to be written by
unprofessional writers. On the other hand, real news sources use
unbiased or neutral words, describing events with facts. The
development of a classifier, and its comparison with other
classification methods, is thus the main focus for fake content
identification
        <xref ref-type="bibr" rid="ref41">(Fan et al. 2017)</xref>
        .
      </p>
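      <p>A minimal sketch of the writing-quality signals mentioned above (misspelled words, punctuation, sentence length). The tiny word list standing in for a spelling dictionary and the example headlines are illustrative assumptions, not features from the cited work.

```python
import re

# Crude style features of the kind used in style-based detection.
# KNOWN_WORDS is a stand-in for a real spelling lexicon (assumption).
KNOWN_WORDS = {"the", "president", "said", "today", "that", "a", "new",
               "law", "was", "signed", "in", "washington"}

def style_features(text):
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    misspelled = sum(1 for w in words if w not in KNOWN_WORDS)
    return {
        "misspelled_ratio": misspelled / max(len(words), 1),
        "exclamations": text.count("!"),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

real = style_features("The president said today that a new law was signed in Washington.")
fake = style_features("SHOKING!!! Presidnt signs SECRET law!!! Unbeleivable!!!")
```

Feature vectors like these would then feed an ordinary classifier; the point is only that the fake-style sample scores worse on every quality signal.</p>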
      <p>It is hard to pin down satire in the scholarly literature (Nidhi et
al. 2011). Another study proposed a method that first
translates the theories of humor, irony, and satire into a
predictive method for satire detection; the conceptual contribution of
this work is to link satire, irony, and humor.
11 www.kaggle.com
12 https://research.signalmedia.co/newsir16/signal-dataset.html</p>
    </sec>
    <sec id="sec-14">
      <title>Social Context Models:</title>
      <p>
        Social media provides additional resources that researchers can use
to supplement and enhance news context models. Social
models engage in the analysis process and capture
information in different forms from a variety of perspectives.
Examining the existing approaches, we can categorize
social context modeling into stance-based and propagation-based.
One important point to highlight is that only a
few existing social context model approaches have been utilized for fake
news detection, so we also draw on the literature about
social context models used for rumor detection. The proper
assessment of fake news stories shared on social media
platforms, and the automatic identification of fake content with
the help of information sources and social judgment on the
basis of Facebook data, is the main point of one such work. Examining
the 2016 US presidential election, the authors showed that machine
learning classifiers can be helpful to detect fake news
        <xref ref-type="bibr" rid="ref47">(Tresh et
al. 1995)</xref>
        .
      </p>
      <p>
         Stance-based
It is a process that determines from the news whether the
reader is in favor of, against, or neutral toward that particular news
(Saif et al. 2017). There are two ways to represent user
stances: explicitly or implicitly. Explicit stances are those
where readers give direct expressions, such as a thumbs
up or a thumbs down. Implicit stances are those where the
stance is extracted from social media posts. Overall, stance
detection is a process of automatically determining from user
posts whether the majority of users are in favor
or against.
        <xref ref-type="bibr" rid="ref37">(Qazvinian et al., 2011; Jin et al., 2016)</xref>
        proposed a
model to check the viewpoint of users and then, on the basis of that
viewpoint, to learn the credibility of posts. (Tecchini et al.
2016) proposed a bipartite network of users and Facebook posts
using the "like" stance; on the basis of these results, the
likelihood that a post is a hoax can be predicted.
      </p>
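      <p>Explicit stances, as described above, are direct reader reactions; a minimal sketch of aggregating them into a majority stance follows (the reaction labels are illustrative assumptions):

```python
from collections import Counter

# Aggregate explicit reader reactions (thumbs up / thumbs down / neutral)
# into a single majority stance for a news item.
def aggregate_stance(reactions):
    counts = Counter(reactions)
    label, _ = counts.most_common(1)[0]
    return label, counts

label, counts = aggregate_stance(
    ["up", "down", "up", "up", "neutral", "down", "up"])
```

Implicit stances would instead be inferred from post text, which requires a trained stance classifier rather than simple counting.</p>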
      <p>
        Stance detection of headlines can be based on n-gram matching for
binary classification of "related" vs. "unrelated" pairs. This
approach can be applied to the detection of fake news, and especially
to clickbait detection. The authors used the dataset released by the
Fake News Challenge organization (FNC-1) on stance detection for their
experiments
        <xref ref-type="bibr" rid="ref45">(Bourgonje et al. 2017)</xref>
        . The dataset is publicly available
and can be downloaded from the corresponding GitHub page
along with a baseline implementation. Key points of the dataset
can be seen in Figure 4 below.
Propagation-based
In the propagation-based approach, homogeneous and heterogeneous
credibility networks are built for propagation. A homogeneous
network contains a single type of entity, such as a post or an event,
while a heterogeneous credibility network contains multiple entity
types, such as posts, events, and sub-events
        <xref ref-type="bibr" rid="ref37 ref38">(Jin et al 2016; Zhiwei et al
2014; Gupta et al 2012)</xref>
        . In the propagation-based approach we
check the interrelation of relevant events in social media posts
to detect fake news and assess the credibility of that news.
Another study builds a three-layer network, including
sub-events, so that the credibility of news can be checked with the
help of a graph optimization framework
        <xref ref-type="bibr" rid="ref38">(Jin et al. 2014)</xref>
        .
A propagation-based algorithm for user encoding can check the
credibility of users and tweets together (Gupta et al. 2014).
      </p>
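      <p>The n-gram matching idea behind the "related" vs. "unrelated" decision can be sketched as an overlap score between headline and body n-grams; this illustrates the principle only, it is not the FNC-1 baseline, and the 0.1 threshold is an assumption.

```python
import re

def ngrams(text, n=2):
    # word n-grams of the lowercased text
    words = re.findall(r"[a-z]+", text.lower())
    return set(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def relatedness(headline, body, n=2):
    h, b = ngrams(headline, n), ngrams(body, n)
    if not h or not b:
        return 0.0
    # fraction of headline n-grams that also occur in the body
    return len(h.intersection(b)) / len(h)

def classify_pair(headline, body, threshold=0.1):
    return "related" if relatedness(headline, body) >= threshold else "unrelated"
```

</p>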
    </sec>
    <sec id="sec-15">
      <title>Similar Application Areas</title>
      <p>In this section we discuss application areas similar to the
problem of fake news detection. Some applications use the data
side and some relate to the knowledge side. They achieve
good results in specific domains, but they require high effort
during development, so the combination with knowledge
engineering can be helpful to reduce that effort. At the end we
discuss some other data-driven applications (Table 1) and a
few where a combination of data-driven and knowledge-based
approaches exists (Table 2).</p>
    </sec>
    <sec id="sec-16">
      <title>Truth Discovery/Hot Topic Detection</title>
      <p>
        Truth discovery plays a distinguished role in the information age,
as we need accurate information now more than ever. Truth
discovery can be beneficial in different application areas,
especially where critical decisions must be based on
reliable information extracted from different sources, e.g.
healthcare
        <xref ref-type="bibr" rid="ref50">(Yaliang et al. 2016)</xref>
        , crowdsourcing
        <xref ref-type="bibr" rid="ref51">(Tschiatschek et
al. 2018)</xref>
        and information extraction
        <xref ref-type="bibr" rid="ref52">(Highet 1972)</xref>
        . In some cases
we have the information but are unable to explain it; in those
cases knowledge engineering can take part, and we can better
predict by learning from previous results.
      </p>
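      <p>The core of truth discovery can be sketched as resolving conflicting claims from multiple sources, optionally weighting sources by reliability; the sources, values, and weights below are illustrative assumptions.

```python
from collections import defaultdict

def resolve(claims, weights=None):
    # claims: list of (source, value) pairs; weights: source reliability scores
    votes = defaultdict(float)
    for source, value in claims:
        votes[value] += 1.0 if weights is None else weights.get(source, 1.0)
    # return the value backed by the largest total weight
    return max(votes, key=votes.get)

claims = [("site_a", "Paris"), ("site_b", "Paris"), ("site_c", "Lyon")]
majority = resolve(claims)
weighted = resolve(claims, {"site_c": 5.0})
```

With equal weights the majority value wins; a source known to be reliable can outweigh the majority, which is where engineered knowledge about sources enters.</p>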
    </sec>
    <sec id="sec-17">
      <title>Rumor Detection</title>
      <p>
        The objective of rumor detection is to classify a piece of
information as rumor or non-rumor. Four steps are involved in the
model, detection, tracking, stance, and veracity, which help to detect
rumors. Social media posts are considered important sensors for
determining the authenticity of a rumor. Rumor detection can be
further categorized into four subtasks: stance classification,
veracity classification, rumor tracking, and rumor classification
        <xref ref-type="bibr" rid="ref53">(Arkaitz et
al. 2017)</xref>
        . Still, a few points require more detail to
understand the problem; we can also learn from the results whether
something actually is a rumor and, if so, to what extent. For these
questions we believe that a combination of the data and
knowledge sides is required to explore those areas that remain
unexplainable.
      </p>
    </sec>
    <sec id="sec-18">
      <title>Clickbait Detection</title>
      <p>
        Attracting visitors' attention and encouraging them to click on a
particular link is the main objective of the clickbait business.
Existing clickbait approaches extract various features
from teaser messages, linked web pages, and tweet meta-
information
        <xref ref-type="bibr" rid="ref54">(Martin et al. 2016)</xref>
        . In the same way, we can notify
readers before they read any kind of news that it could be fake,
due to specific indications, so that readers can be more
careful.
      </p>
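      <p>As a sketch of the kind of surface features extracted from teaser messages, the indicator phrases and the threshold below are assumptions for illustration, not the feature set of the cited work.

```python
import re

# A few surface indicators commonly associated with clickbait teasers.
TEASER_PHRASES = ("you won't believe", "what happened next",
                  "this one trick", "will shock you")

def clickbait_score(headline):
    h = headline.lower()
    score = sum(1 for p in TEASER_PHRASES if p in h)
    if "?" in h or "!" in h:
        score += 1
    if re.match(r"^\d+\s", h):     # listicle pattern, e.g. "7 ways ..."
        score += 1
    return score

def is_clickbait(headline, threshold=2):
    return clickbait_score(headline) >= threshold
```

A real detector would learn such weights from labeled teasers instead of hand-setting them.</p>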
    </sec>
    <sec id="sec-19">
      <title>Email Spam Detection</title>
      <p>
        Spam detection in email is one of the major problems
bringing financial damage to companies and annoying
individual users. Different groups are working with different
approaches to detect spam in email, and various machine
learning approaches are very helpful for spam filtering.
Spam causes different problems, as discussed broadly
above; more precisely, spam misuses network traffic,
computational power, and storage space. This study also explains that
many other techniques can be helpful for spam detection,
such as email filtering, blacklists of unauthorized addresses,
whitelists, legal actions, and many more
        <xref ref-type="bibr" rid="ref55">(Siponen et al. 2006)</xref>
        .
The two tables below give an overview of the data and
knowledge sides and the specific application domains where they
were applied to resolve the issue.
      </p>
      <p>
        Approaches based on supervised or unsupervised methods do not
provide good results, due to the non-availability of a gold-standard
dataset that could help to train and evaluate a classifier
(Subhabrata et al. 2015). It is a fact that the motivations and
psychological states of mind of ordinary people can be
different from those of professionals in the real world. Different
groups are now working to combat this hot issue, and for that
purpose they plan to utilize actual datasets rather than
opinions and blogs. To tackle the problem of fake news detection we
need to incorporate both behavioral and social entities and to
combine knowledge and data. In this chapter we tried to discuss
all possible types of fake news and their social impact; on the
basis of the literature evaluation we can say that it is
also possible to detect fake news with different known facts
such as time, location, quality, and the stance of others. With these
types of measured similarities we can assess the quality of news.
In the next chapter we discuss the proposed combination and a
statistical analysis of a publicly available dataset, to
understand the issue more deeply.
      </p>
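      <p>One of the machine learning approaches commonly used for spam filtering is Naive Bayes; a minimal self-contained sketch follows (the four training messages are toy assumptions):

```python
import math
from collections import Counter

def train(docs):
    # docs: list of (text, label) with label "spam" or "ham"
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter(label for _, label in docs)
    for text, label in docs:
        counts[label].update(text.lower().split())
    return counts, totals

def predict(text, counts, totals):
    vocab = set(counts["spam"]) | set(counts["ham"])
    best, best_lp = None, -math.inf
    for label in ("spam", "ham"):
        lp = math.log(totals[label] / sum(totals.values()))  # class prior
        denom = sum(counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / denom)   # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("win free money now", "spam"), ("cheap pills free offer", "spam"),
        ("meeting agenda for tomorrow", "ham"), ("project report attached", "ham")]
model = train(docs)
```

</p>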
    </sec>
    <sec id="sec-20">
      <title>Method</title>
      <p>We learn from data and from engineered knowledge to overcome
the fake news issue on social media. To achieve this goal, a new
combined algorithmic approach (Figure 8) shall be developed,
which will classify the text as soon as the news is published
online.</p>
      <p>Table 2: Combinations of Data Driven and Knowledge. One entry:
Pyszozynski et al. (1999), "A dual process model of defence against
conscious and unconscious death-related thoughts: An extension of
terror management theory".</p>
    </sec>
    <sec id="sec-21">
      <title>Discussion</title>
      <p>We discussed different approaches that have been defined in
the last few years to overcome the problem of fake news
detection in social networks. Most of the approaches are based on
supervised or unsupervised methods.</p>
      <p>In developing such a new classification approach, as a starting
point for the investigation of fake news we first applied a
publicly available dataset for our learning. The first step in fake
news detection is classifying the text immediately once the
news is published online. Text classification is one of the
important research issues in the field of text mining. The dramatic
increase in the content available online raises the problem of
managing this online textual data, so it is important to classify
the news into specific classes (fake, non-fake, unclear).</p>
      <p>For training and understanding the classifier we used a publicly
available dataset13 based on a collection of approximately
seventeen thousand news articles extracted from online
news organizations. Each article records: location of the article
(country); publication details (organization, author, date, unique id);
text (title, full article, online link); and classification details.</p>
      <p>Manually classifying the millions of news items published
online is a time-consuming and expensive task, so before turning
to automatic text classification we need to understand the
different text classification techniques (Nidhi et al. 2011).</p>
      <p>In this section we discuss, step by step, how we can combine
learning from data with engineered knowledge in order to combat
fake news in social networks. Once news is published online, the
classifier classifies the text into the classes fake, non-fake, and
unclear. After text classification we check the stance of that
particular news item, which places the news into one of four
categories: agree, disagree, discuss, and not related. In the next
step we apply fact checking, which refines our results, as fact
checking uses engineered knowledge to analyze the content of the
text and compare it with known facts (see figure).</p>
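      <p>The step-by-step combination described above (classification, then stance, then knowledge-based fact checking) can be sketched as a pipeline; every stage here is a hypothetical placeholder showing the control flow, not a real model.

```python
def classify_text(article):
    return "unclear"      # placeholder: would return fake / non-fake / unclear

def detect_stance(article):
    return "discuss"      # placeholder: agree / disagree / discuss / not related

def fact_check(article, known_facts):
    # engineered knowledge: compare the claim against known facts
    claim = article.get("claim")
    if claim in known_facts:
        return "non-fake" if known_facts[claim] else "fake"
    return "unclear"

def pipeline(article, known_facts):
    label = classify_text(article)
    stance = detect_stance(article)
    verdict = fact_check(article, known_facts)
    # the knowledge side refines the data-driven label when it can
    final = verdict if verdict != "unclear" else label
    return {"classifier": label, "stance": stance, "final": final}

facts = {"water boils at 100C at sea level": True}
result = pipeline({"claim": "water boils at 100C at sea level"}, facts)
```

</p>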
    </sec>
    <sec id="sec-22">
      <title>Data extraction and analysis</title>
      <p>The dataset was already sorted qualitatively into different
categories: fake, not fake, bias, conspiracy, and hate. We further
classified the data by different result indicators (replies,
participants, likes, comments, and total number of shares). In the
next step we show the outcomes of this dataset, which help
us to understand the process.</p>
    </sec>
    <sec id="sec-23">
      <title>Results Extracted from dataset and Future goal</title>
      <p>The details of the dataset, with the classified attributes
mentioned above, are given in the collection tab; in Figure 9 we
highlight the results we obtained, showing how we can specify
claims that are helpful for the proposed combination of
techniques. Of the 17,946 news articles, 12,460 articles were in the
bias category, 572 were fake articles, 870 articles were in the
conspiracy category, and 2,059 were non-fake articles.
Our proposed combination diagram contains two parts, a data part
and a knowledge part, whose further classification can
be seen in the diagram. The data side contains text classification
and stance detection, while the knowledge side contains fact
checking, which helps us to refine the results. We divide our
task into three parts and, at the end, combine the results to
check whether the news is fake or not fake.</p>
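      <p>A quick arithmetic check of the category counts reported above (the remainder of the 17,946 articles falls into other categories, such as hate, that are not listed individually):

```python
# Category counts reported in the text for the 17,946-article dataset.
total = 17946
counts = {"bias": 12460, "fake": 572, "conspiracy": 870, "non-fake": 2059}

shares = {k: round(100.0 * v / total, 1) for k, v in counts.items()}
other = total - sum(counts.values())   # articles in the remaining categories
```

</p>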
    </sec>
    <sec id="sec-24">
      <title>Discussion</title>
      <p>13 https://www.kaggle.com/mrisdal/fake-news/version/1</p>
      <p>When news is published online, our proposed classifier will
check the similarity between words, the similarity between texts,
and the overall similarity. From the literature we have come to
know that for news datasets SVM can be a good starting point
because of the way it deals with the data; as we need some
mathematical expressions, we may need to use other library APIs,
in which case it can perform well. A neural network produces good
results, but only if we have a large sample size and large storage
space, and it is also intolerant of noise. A term graph is
preferred especially when we have adjacent words and our objective
is to maintain the correlation between classes. A Bayesian
classifier can also perform well, but only when the dataset is
small.</p>
      <p>In the stance detection step we check whether the viewpoint of
the reader of the news is in favor, against, or neutral. According
to the literature there are two ways to represent user stance:
explicit and implicit. In the explicit case the readers give direct
expressions, such as a thumbs up or a thumbs down; in the implicit
case we extract the stance from social media posts.</p>
      <p>Finally, we apply fact checking, which works on two points:
identifying check-worthy claims and discriminating the veracity of
claims. We apply key bases and the viewpoints of users on that
particular news. Examples of fact-checking resources are the open
web and structured knowledge graphs.</p>
      <p>At the end we will automate our proposed combination so that
it can classify the text automatically; after stance detection
and fact checking we will be in a position to determine whether
the news is fake or not fake.</p>
      <p>In this survey, we have covered previous efforts toward the
development of a fake news system that can detect and resolve the
veracity of individual rumors. As discussed in the introduction,
fake content became a big issue after the 2016 US presidential
election; the veracity value of a rumor is unverifiable in its
early stages and is subsequently resolved as true or false in a
relatively short period of time, or it can remain unverified for a
long time. We also discussed different detection systems that have
distinct characteristics but also commonalities with rumors, which
makes fake news difficult to detect with data-driven methods
alone. The approaches discussed in this article each tackle the
fake news issue in some way, but we expect that their integration
will be helpful for detection (Figure 10).</p>
      <p>Since fake news producers seem to keep improving their sharing
strategies to evade text classification and detection techniques,
fake news detection organizations are required to keep updating
their strategies.</p>
    </sec>
    <sec id="sec-25">
      <title>Conclusion</title>
      <p>After the recent US presidential election, social media has
often become a vehicle for spreading misinformation and hoaxes.
No instruments or cognitive abilities are required to assess the
credibility of the other party: anyone can simply come and share
an opinion on social media. This may have no serious consequences
if only minor rumors are shared or spread, but it becomes a
serious problem when consumers purchase products on the basis of
these rumors, or when serious security issues arise. This is
especially true in the context of politics, where public opinion
is influenced and individuals run small-scale or large-scale
operations only to ruin someone's credibility (e.g., the Donald
Trump vs. Hillary Clinton election). In this paper we tried to
cover the work that includes knowledge-based and style-based
approaches, and we further explained the subcategories that occur
in these two domains, e.g. social context based, propagation
based, and stance based. We took into consideration the effect of
fake news on social platforms, and we also covered some contexts
where false news generates serious issues for the individuals
involved. We have presented a state-of-the-art block diagram that
is the combination of knowledge (fact checking) and data (text
classification, stance detection).</p>
      <p>As we already discussed, the important open issue is the
non-availability of a gold-standard dataset and a predefined
benchmark, as well as the collection of a large corpus of fake
articles. On the basis of the points we highlighted, one can say
that even in the big data era the problem has not received the
attention it deserves, although in the expert-oriented section we
discussed a few approaches that have been proposed to automate
fact checking and the credibility assessment of news. We can
analyze fake news with different similarity measures. We can
detect whether the same news has been published by other media
agencies. We can check the location of the news: an item may have
a higher probability of being fake if it is generated somewhere
other than the location it deals with (e.g. Trump writes about
China or the Arabian states, or news about Hillary Clinton
originates in Russia). We can check news quality: fake news items
more often do not mention their sources and simply claim
something, while for real news the source is mentioned. And we can
check the timing of the news, i.e. whether the same news appears
in other media or sources, whether it is repeated more often in
the beginning because it is interesting, and whether it becomes
recognized as fake over time, which reduces the repetition or
leads to its deletion from some websites. At this stage we do not
have a definitive solution, but after this detailed literature
review we can say that producing more reports with more facts can
be useful for helping us to make such decisions and find technical
solutions for fake news detection.</p>
      <p>The combination of machine learning and knowledge engineering
can be useful for fake news detection, as fake news looks likely
to be among the most challenging areas of research in the coming
years.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Lewandowsky</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ecker</surname>
            ,
            <given-names>U. K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Cook</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Beyond misinformation: Understanding and coping with the “post-truth” era</article-title>
          .
          <source>Journal of Applied Research in Memory and Cognition</source>
          ,
          <volume>6</volume>
          (
          <issue>4</issue>
          ),
          <fpage>353</fpage>
          -
          <lpage>369</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Rainie</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>J. Q.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Albright</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2017</year>
          ). “
          <article-title>The future of free speech, trolls, anonymity and fake news online" Washington</article-title>
          , DC: Pew Research Center.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>Scott L.</given-names>
            <surname>Althaus</surname>
          </string-name>
          &amp; David
          <string-name>
            <surname>Tewksbury</surname>
          </string-name>
          (
          <year>2000</year>
          )
          <article-title>Patterns of Internet and Traditional News Media Use in a Networked Community</article-title>
          , Political Communication,
          <volume>17</volume>
          (
          <issue>1</issue>
          ),
          <fpage>21</fpage>
          -
          <lpage>45</lpage>
          , DOI: 10.1080/105846000198495.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Leonhardt</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thompson</surname>
            ,
            <given-names>S. A.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>"Trump's Lies"</article-title>
          . New York Times.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Himma</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , (
          <year>2017</year>
          ).
          <article-title>"Alternative facts and fake news entering journalistic content production cycle"</article-title>
          .
          <source>Cosmopolitan Civil Societies: An Interdisciplinary Journal</source>
          .
          <volume>9</volume>
          (
          <issue>2</issue>
          ):
          <fpage>25</fpage>
          -
          <lpage>41</lpage>
          . DOI: 10.5130/ccs.v9i2.5469.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <article-title>Fake news and the internet shell game</article-title>
          . New York Times, Nov. 28,
          <year>2016</year>
          . https://www.nytimes.com/2016/11/28/opinion/fake-news-and-the-internet-shell-game.html.
          <string-name>
            <surname>J. Soll.</surname>
          </string-name>
          , (
          <year>2016</year>
          ).
          <article-title>The long and brutal history of fake news</article-title>
          . https://www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <given-names>W.</given-names>
            <surname>Ferreira</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Vlachos</surname>
          </string-name>
          , “
          <article-title>Emergent: a novel data-set for stance classification,” in Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</article-title>
          ,
          <string-name>
            <surname>ACL</surname>
          </string-name>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <article-title>Stance and sentiment in tweets</article-title>
          .
          <source>ACM Transactions on Internet Technology (TOIT)</source>
          ,
          <volume>17</volume>
          (
          <issue>3</issue>
          ):
          <fpage>26</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <given-names>Michal</given-names>
            <surname>Lukasik</surname>
          </string-name>
          , Trevor Cohn, and
          <string-name>
            <given-names>Kalina</given-names>
            <surname>Bontcheva</surname>
          </string-name>
          . 2015a.
          <article-title>Classifying tweet level judgements of rumours in social media</article-title>
          .
          <source>In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP‟15)</source>
          .
          <fpage>2590</fpage>
          -
          <lpage>2595</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <given-names>Michal</given-names>
            <surname>Lukasik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Srijith</surname>
          </string-name>
          , Duy Vu, Kalina Bontcheva, Arkaitz Zubiaga, and
          <string-name>
            <given-names>Trevor</given-names>
            <surname>Cohn</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Hawkes processes for continuous time sequence classification: An application to rumour stance classification in Twitter</article-title>
          . In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 393-398.
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Ciampaglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shiralkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Rocha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bollen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Menczer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Flammini</surname>
          </string-name>
          , “
          <article-title>Computational fact checking from knowledge networks,” PlOS ONE</article-title>
          , vol.
          <volume>10</volume>
          , no.
          <issue>6</issue>
          , p.
          <fpage>e0128193</fpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <given-names>N.</given-names>
            <surname>Lao</surname>
          </string-name>
          and
          <string-name>
            <given-names>W. W.</given-names>
            <surname>Cohen</surname>
          </string-name>
          , “
          <article-title>Relational retrieval using a combination of path-constrained random walks,” Machine Learning</article-title>
          , vol.
          <volume>81</volume>
          , no.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          1, pp.
          <fpage>53</fpage>
          -
          <lpage>67</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <given-names>B.</given-names>
            <surname>Shi</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Weninger</surname>
          </string-name>
          , “
          <article-title>Discriminative predicate path mining for fact checking in knowledge graphs,” Knowledge-Based Systems</article-title>
          , vol.
          <volume>104</volume>
          , pp.
          <fpage>123</fpage>
          -
          <lpage>133</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <given-names>L.</given-names>
            <surname>Katz</surname>
          </string-name>
          , “
          <article-title>A new status index derived from sociometric analysis</article-title>
          ,
          <source>” Psychometrika</source>
          , vol.
          <volume>18</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>43</lpage>
          ,
          <year>1953</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Adamic</surname>
          </string-name>
          and E. Adar, “
          <article-title>Friends and neighbors on the web,” Social networks</article-title>
          , vol.
          <volume>25</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>211</fpage>
          -
          <lpage>230</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <given-names>D.</given-names>
            <surname>Liben-Nowell</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Kleinberg</surname>
          </string-name>
          , “
          <article-title>The link-prediction problem for social networks</article-title>
          ,
          <source>” Journal of the American society for Information Science and Technology</source>
          , vol.
          <volume>58</volume>
          , no.
          <issue>7</issue>
          , pp.
          <fpage>1019</fpage>
          -
          <lpage>1031</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Lazer</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Mathew A.</given-names>
            <surname>Baum</surname>
          </string-name>
          , &amp;
          <string-name>
            <given-names>Yochai</given-names>
            <surname>Benkler</surname>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>The science of fake news</article-title>
          . [online] Scholar.harvard.edu. Available at: https://scholar.harvard.edu/files/mbaum/files/science_of_fake_news.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Hasher</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goldstein</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Toppino</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>1977</year>
          ).
          <article-title>Frequency and the conference of referential validity</article-title>
          .
          <source>Journal of Verbal Learning and Verbal Behavior</source>
          ,
          <volume>16</volume>
          ,
          <fpage>107</fpage>
          -
          <lpage>112</lpage>
          . doi:10.1016/S0022-5371(77)80012-1.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Begg</surname>
            ,
            <given-names>I. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anas</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Farinacci</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>1992</year>
          ).
          <article-title>Dissociation of processes in belief: Source recollection, statement familiarity, and the illusion of truth</article-title>
          .
          <source>Journal of Experimental Psychology. General</source>
          ,
          <volume>121</volume>
          ,
          <fpage>446</fpage>
          -
          <lpage>458</lpage>
          . doi:10.1037/0096-3445.121.4.446.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Bacon</surname>
            ,
            <given-names>F. T.</given-names>
          </string-name>
          (
          <year>1979</year>
          ).
          <article-title>Credibility of repeated statements: Memory for trivia</article-title>
          .
          <source>Journal of Experimental Psychology. Human Learning and Memory</source>
          ,
          <volume>5</volume>
          ,
          <fpage>241</fpage>
          -
          <lpage>252</lpage>
          . doi:10.1037/0278-7393.5.3.241.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Potts</surname>
            ,
            <given-names>G. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>St. John</surname>
            ,
            <given-names>M. F.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kirson</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>1989</year>
          ).
          <article-title>Incorporating new information into existing world knowledge</article-title>
          .
          <source>Cognitive Psychology</source>
          ,
          <volume>21</volume>
          ,
          <fpage>303</fpage>
          -
          <lpage>333</lpage>
          . doi:10.1016/0010-0285(89)90011-X.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Zhai</surname>
          </string-name>
          et al. (
          <year>2005</year>
          ).
          <article-title>Tracking News Stories across Different Sources</article-title>
          .
          <source>MM '05</source>
          , November 6-11,
          <year>2005</year>
          , Singapore.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <given-names>H.</given-names>
            <surname>Allcott</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Gentzkow</surname>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Social media and fake news in the 2016 election</article-title>
          . National Bureau of Economic Research. URL http://www.nber.org/papers/w23089.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>Mosseri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Working to stop misinformation and false news</article-title>
          .
          <source>Newsroom.fb.com.</source>
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <given-names>M.</given-names>
            <surname>Wohlsen</surname>
          </string-name>
          .
          <article-title>Stop the lies: Facebook will soon let you flag hoax news stories</article-title>
          , May
          <year>2015</year>
          . URL https://www.wired.com/2015/01/facebookwants-stop-lies-letting-users-flag-news-hoaxes/. Stanford History Education Group.
          <article-title>Evaluating information: The cornerstone of civic online reasoning</article-title>
          , Nov.
          <year>2016</year>
          . URL https://stacks.stanford.edu/file/druid:fv751yt5934/SHEG%20Evaluating%20Information%20Online.pdf.
          <string-name>
            <given-names>J.</given-names>
            <surname>Guynn</surname>
          </string-name>
          .
          <article-title>Facebook begins flagging “disputed” (fake) news</article-title>
          , Mar.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          2017. URL https://www.usatoday.com/story/tech/news/2017/03/06/facebook-begins-flagging-disputed-fake-news/98804948.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <given-names>R.</given-names>
            <surname>Dillet</surname>
          </string-name>
          .
          <article-title>Facebook runs full page newspaper ads against fake news in France ahead of the election</article-title>
          , Apr.
          <year>2017</year>
          . URL https:
          <string-name>
            <given-names>D.</given-names>
            <surname>Lazer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Baum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Grinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Friedland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Joseph</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Hobbs</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Mattsson</surname>
          </string-name>
          .
          <article-title>Combating fake news: An agenda for research and action</article-title>
          , May
          <year>2017</year>
          . URL https://shorensteincenter.org/combatingfake-news-agenda-for-research/.
          <string-name>
            <given-names>F.</given-names>
            <surname>Da Silva</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Englind</surname>
          </string-name>
          .
          <article-title>Troll detection: A comparative study in detecting troll farms on Twitter using cluster analysis</article-title>
          .
          <source>DD151X Examensarbete i Datateknik, grundnivå</source>
          ,
          <year>2016</year>
          . URL http://www.divaportal.org/smash/get/diva2:927209/FULLTEXT02.
          <string-name>
            <given-names>G.</given-names>
            <surname>Nygren</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Hök</surname>
          </string-name>
          .
          <article-title>Ukraine and the information war - journalism between ideal and self-esteem</article-title>
          .
          <source>The Federal Agency for Protection and Preparedness</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>W.Y.</given-names>
          </string-name>
          ,
          <year>2017</year>
          .
          <article-title>" Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection</article-title>
          .
          <source>In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)</source>
          (Vol.
          <volume>2</volume>
          , pp.
          <fpage>422</fpage>
          -
          <lpage>426</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          People and Responsibilities.
          <source>Reich Conference 2017</source>
          ,
          <year>2017</year>
          . URL https://www.youtube.com/watch?v=h7fTVDYzNM.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          <string-name>
            <given-names>Andreas</given-names>
            <surname>Vlachos</surname>
          </string-name>
          and
          <string-name>
            <given-names>Sebastian</given-names>
            <surname>Riedel</surname>
          </string-name>
          .
          <article-title>Fact checking: Task definition and dataset construction</article-title>
          .
          <source>In ACL '14.</source>
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          <string-name>
            <given-names>John</given-names>
            <surname>Houvardas</surname>
          </string-name>
          and
          <string-name>
            <given-names>Efstathios</given-names>
            <surname>Stamatatos</surname>
          </string-name>
          .
          <article-title>N-gram feature selection for authorship identification</article-title>
          .
          <source>Artificial Intelligence: Methodology, Systems, and Applications</source>
          , pages
          <fpage>77</fpage>
          -
          <lpage>86</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          <string-name>
            <given-names>Michele</given-names>
            <surname>Banko</surname>
          </string-name>
          , Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and
          <string-name>
            <given-names>Oren</given-names>
            <surname>Etzioni</surname>
          </string-name>
          .
          <article-title>Open information extraction from the web</article-title>
          .
          <source>In IJCAI '07.</source>
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          <string-name>
            <given-names>Amr</given-names>
            <surname>Magdy</surname>
          </string-name>
          and
          <string-name>
            <given-names>Nayer</given-names>
            <surname>Wanas</surname>
          </string-name>
          .
          <article-title>Web-based statistical fact checking of textual documents</article-title>
          .
          <source>In Proceedings of the 2nd international workshop on Search and mining user-generated contents</source>
          , pages
          <fpage>103</fpage>
          -
          <lpage>110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          ACM,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          <article-title>Stance and sentiment in tweets</article-title>
          .
          <source>ACM Transactions on Internet Technology (TOIT)</source>
          ,
          <volume>17</volume>
          (
          <issue>3</issue>
          ):
          <fpage>26</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          <string-name>
            <given-names>Vahed</given-names>
            <surname>Qazvinian</surname>
          </string-name>
          , Emily Rosengren,
          <string-name>
            <given-names>Dragomir R.</given-names>
            <surname>Radev</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Qiaozhu</given-names>
            <surname>Mei</surname>
          </string-name>
          .
          <article-title>Rumor has it: Identifying misinformation in microblogs</article-title>
          .
          <source>In EMNLP '11.</source>
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          <string-name>
            <given-names>Zhiwei</given-names>
            <surname>Jin</surname>
          </string-name>
          , Juan Cao, Yongdong Zhang, and
          <string-name>
            <given-names>Jiebo</given-names>
            <surname>Luo</surname>
          </string-name>
          .
          <article-title>News verification by exploiting conflicting social viewpoints in microblogs</article-title>
          . In AAAI
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          <string-name>
            <given-names>Zhiwei</given-names>
            <surname>Jin</surname>
          </string-name>
          , Juan Cao,
          <string-name>
            <surname>Yu-Gang Jiang</surname>
          </string-name>
          , and Yongdong Zhang.
          <article-title>News credibility evaluation on microblog with a hierarchical propagation model</article-title>
          . In ICDM
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          <string-name>
            <given-names>Manish</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Peixiang</given-names>
            <surname>Zhao</surname>
          </string-name>
          , and Jiawei Han.
          <article-title>Evaluating event credibility on Twitter</article-title>
          .
          <source>In PSDM '12.</source>
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          <article-title>Clickbait detection</article-title>
          .
          <source>In European Conference on Information Retrieval</source>
          , pages
          <fpage>810</fpage>
          -
          <lpage>817</lpage>
          . Springer,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          <string-name>
            <surname>Fan</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Classifying Fake News</article-title>
          . conniefan.com.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          <string-name>
            <surname>Ruchansky</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seo</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2017</year>
          , November).
          <article-title>Csi: A hybrid deep model for fake news detection</article-title>
          .
          <source>In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management</source>
          (pp.
          <fpage>797</fpage>
          -
          <lpage>806</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          <string-name>
            <surname>Rubin</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Conroy</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Cornwell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Fake news or truth? using satirical cues to detect potentially misleading news</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          <source>In Proceedings of the Second Workshop on Computational Approaches to Deception Detection</source>
          (pp.
          <fpage>7</fpage>
          -
          <lpage>17</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          <string-name>
            <surname>Bourgonje</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schneider</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Rehm</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>From clickbait to fake news detection: an approach based on detecting the stance of headlines to articles</article-title>
          .
          <source>In Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism</source>
          (pp.
          <fpage>84</fpage>
          -
          <lpage>89</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          <string-name>
            <surname>Bajaj</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>“The Pope Has a New Baby!” Fake News Detection Using Deep Learning</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          <string-name>
            <surname>Tresh</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          &amp;
          <string-name>
            <surname>Luniewski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>1995</year>
          ).
          <source>In Proceedings of the fourth international conference on Information and knowledge management</source>
          (pp.
          <fpage>226</fpage>
          -
          <lpage>233</lpage>
          ). ACM.
          <string-name>
            <surname>Rubin</surname>
            ,
            <given-names>V. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Conroy</surname>
            ,
            <given-names>N. J.</given-names>
          </string-name>
          (
          <year>2015</year>
          , November).
          <article-title>Deception detection for news: three types of fakes</article-title>
          .
          <source>In Proceedings of the 78th ASIS&amp;T Annual Meeting: Information Science with Impact: Research in and for the Community</source>
          (p.
          <fpage>83</fpage>
          ). American Society for Information Science.
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          (
          <year>2015</year>
          , May).
          <article-title>Real-Time News Cer tification System on Sina Weibo</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          <source>In Proceedings of the 24th International Conference on World Wide Web</source>
          (pp.
          <fpage>983</fpage>
          -
          <lpage>988</lpage>
          ). ACM.
          <string-name>
            <surname>Guha</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Related Fact Checks: a tool for combating fake news</article-title>
          .
          <source>arXiv preprint arXiv:1711.00715</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          <string-name>
            <given-names>Yaliang</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jing</given-names>
            <surname>Gao</surname>
          </string-name>
          , Chuishi Meng,
          <string-name>
            <given-names>Qi</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Lu</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Bo</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Wei</given-names>
            <surname>Fan</surname>
          </string-name>
          , and Jiawei Han.
          <article-title>A survey on truth discovery</article-title>
          .
          <source>ACM Sigkdd Explorations Newsletter</source>
          ,
          <volume>17</volume>
          (
          <issue>2</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          <string-name>
            <surname>Tschiatschek</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singla</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomez Rodriguez</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Merchant</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Krause</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2018</year>
          , April).
          <article-title>Fake News Detection in Social Networks via Crowd Signals</article-title>
          .
          <source>In Companion of the The Web Conference 2018 on The Web Conference</source>
          <year>2018</year>
          (pp.
          <fpage>517</fpage>
          -
          <lpage>524</lpage>
          ).
          <source>International World Wide Web Conferences Steering Committee.</source>
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          <string-name>
            <surname>Highet</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>1972</year>
          ).
          <source>The Anatomy of Satire</source>
          . Princeton, N.J: Princeton University Press.
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          <string-name>
            <given-names>Arkaitz</given-names>
            <surname>Zubiaga</surname>
          </string-name>
          , Ahmet Aker, Kalina Bontcheva, Maria Liakata, and
          <string-name>
            <given-names>Rob</given-names>
            <surname>Procter</surname>
          </string-name>
          .
          <article-title>Detection and resolution of rumours in social media: A survey</article-title>
          .
          <source>arXiv preprint arXiv:1704.00656</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          <string-name>
            <given-names>Martin</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Sebastian</given-names>
            <surname>Köpsel</surname>
          </string-name>
          , Benno Stein, and
          <string-name>
            <given-names>Matthias</given-names>
            <surname>Hagen</surname>
          </string-name>
          .
          <article-title>Clickbait detection</article-title>
          .
          <source>In European Conference on Information Retrieval</source>
          , pages
          <fpage>810</fpage>
          -
          <lpage>817</lpage>
          . Springer,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          <string-name>
            <surname>Siponen</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stucke</surname>
            <given-names>C</given-names>
          </string-name>
          (
          <year>2006</year>
          )
          <article-title>Effective anti-spam strategies in companies: an international study</article-title>
          .
          <source>In: Proceedings of HICSS '06.</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>