=Paper=
{{Paper
|id=Vol-2350/paper10
|storemode=property
|title=Combining Machine Learning with Knowledge Engineering to detect Fake News in Social Networks - A Survey
|pdfUrl=https://ceur-ws.org/Vol-2350/paper10.pdf
|volume=Vol-2350
|authors=Knut Hinkelmann,Sajjad Ahmed,Flavio Corradini
|dblpUrl=https://dblp.org/rec/conf/aaaiss/HinkelmannAC19
}}
==Combining Machine Learning with Knowledge Engineering to detect Fake News in Social Networks - A Survey==
Combining Machine Learning with Knowledge Engineering to detect Fake News in Social Networks - A Survey

Sajjad Ahmed (1), Knut Hinkelmann (2), Flavio Corradini (1)

(1) Department of Computer Science, University of Camerino, Italy
(2) FHNW University of Applied Sciences and Arts Northwestern Switzerland, Riggenbachstrasse 16, 4600 Olten, Switzerland

ahmed.sajjad@unicam.it; knut.hinkelmann@fhnw.ch; flavio.corradini@unicam.it
Abstract

Due to the extensive spread of fake news on social and news media, fake news detection has become an emerging research topic that is attracting wide attention. On news media and social media, information spreads at high speed but without checks on its accuracy, so a detection mechanism should be able to classify news fast enough to curb the dissemination of fake news, which has the potential for negative impacts on individuals and society. Detecting fake news on social media is therefore important, but it is also a technically challenging problem. Machine learning is helpful for building artificial intelligence systems based on tacit knowledge, because it can help us solve complex problems using real-world data. Knowledge engineering, on the other hand, is helpful for representing the explicit knowledge of experts. We therefore propose that an integration of machine learning and knowledge engineering can be helpful for the detection of fake news. In this paper we present what fake news is, why it matters, its overall impact on different areas, different ways to detect fake news on social media, existing detection algorithms that can help to overcome the issue, and similar application areas; at the end we propose a combination of data-driven and engineered knowledge to combat fake news. We studied and compared three different modules - text classifiers, stance detection applications and existing fact checking techniques - that can help to detect fake news, and we investigated the impact of fake news on society. Experimental evaluation of publicly available datasets suggests that our proposed combination can serve fake news detection better.

Introduction

Fake news and the spread of misinformation have dominated the news cycle since the US Presidential elections in 2016. Some reports show that Russia created millions of fake accounts and social bots to spread false stories during the elections (Lewandowsky 2017). Various motivations are observed for spreading fake news and generating this type of information on social media channels; among them are achieving political gains, ruining someone else's reputation, and seeking attention. Fake news is a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media.

The importance of fake news can easily be understood from the report published by the Pew Research Center (Rainie et al. 2017). Its statistics show that 38% of adults often get news online, 28% rely on websites/apps and 18% rely on social media, while overall 64% of adults feel that fake news causes a great deal of confusion. The importance of fake news can also be judged from the diagram below, which shows how dramatically fake news gained worldwide popularity after the 2016 US presidential elections.

Figure 1: Google Trends interest in "fake news" over the last five years

Copyright held by the author(s). In A. Martin, K. Hinkelmann, A. Gerber, D. Lenat, F. van Harmelen, P. Clark (Eds.), Proceedings of the AAAI 2019 Spring Symposium on Combining Machine Learning with Knowledge Engineering (AAAI-MAKE 2019). Stanford University, Palo Alto, California, USA, March 25-27, 2019.
The wide-ranging spread of fake news can have a negative impact on society and individuals. Fake news intentionally persuades consumers to accept biased or false beliefs, and it changes the way people interpret and respond to real news. For example, some fake news is created merely to trigger people's distrust and confuse them, impeding their ability to differentiate what is true from what is not (Scott et al., 2000; Leonhard et al., 2017; Himma 2017). It is important to understand that fake and deceptive news has existed for a long time; it has been part of the conversation since the birth of the free press (Soll 2016). There are various approaches to automated fake news detection: text classification, stance detection, metadata analysis and fact checking.

Data Driven:

Text classification: These approaches mainly focus on extracting various features of the text and then incorporating those features into classification models, e.g. decision trees, SVM, logistic regression or k-nearest neighbors, finally selecting the algorithm that performs best (Nidhi et al. 2011); a minimal sketch of this model-selection step is given below. Emergent (www.emergent.info) is a real-time, data-driven rumor identification approach. It automatically tracks rumors associated with social media, but the steps that require human input have not been automated. The problem is that most classification approaches are supervised, so we need a prior dataset to train the model, and obtaining a reliable fake news dataset is a very time-consuming process.
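To make the model-selection step concrete, the following sketch (ours, not taken from the cited works) compares several standard scikit-learn classifiers on TF-IDF features and keeps the best one; the file name and column names are hypothetical placeholders.

```python
# Sketch: compare standard text classifiers with cross-validation, keep the best.
# Assumes a CSV with columns "text" and "label"; file and columns are placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("news_articles.csv")          # hypothetical dataset
X, y = df["text"], df["label"]                 # label: fake / non-fake

candidates = {
    "decision_tree": DecisionTreeClassifier(max_depth=20),
    "svm": LinearSVC(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(n_neighbors=5),
}

scores = {}
for name, clf in candidates.items():
    # TF-IDF features feed each candidate classifier.
    pipeline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2), clf)
    scores[name] = cross_val_score(pipeline, X, y, cv=5, scoring="f1_macro").mean()

best = max(scores, key=scores.get)
print(scores, "-> best:", best)
```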
Stance detection: False news detection became an important task after the 2016 US Presidential elections; governments, newspapers and social media organizations are working hard to separate fake from credible content. The first step in the identification phase is to understand what others are saying about the same topic (Ferreira et al. 2016), which is why the Fake News Challenge initially focused on stance detection. In stance detection we estimate the relatedness of two different pieces of text on the same topic and the stance others take towards it (Saif et al. 2017). PHEME (www.pheme.eu) was a three-year research project funded by the European Commission from 2014 to 2017, studying natural language processing techniques for rumor detection, stance classification (Lukasik et al. 2015; Zubiaga et al. 2016), contradiction detection and the analysis of social media rumors. Existing stance detection approaches are based on embedding features of individual posts to predict the stance of that particular content.
Meta-data: We can also analyze fake news via different similarity measures, e.g. location, time, author and quality. We can detect whether the same news was published by other media agencies or not. We can check the location of the news: an item may have a higher probability of being fake if it is generated somewhere other than the location it deals with (e.g. Trump writes about China or Arab states, or news about Hillary Clinton originates in Russia). We can check quality: it is more probable that fake news does not mention its sources and simply claims something, while for real news the source is mentioned. And we can check the time of the news, i.e. whether the same news appears in other media: fake items are often repeated frequently in the beginning, because they are interesting, and become recognized as fake over time, which reduces the repetition, or they are deleted from some websites. At this stage we do not have a definitive solution, but after a detailed literature review we can say that producing more reports with more facts can help us make such decisions and find technical solutions for fake news detection.

Knowledge Engineering:

Fact checking techniques mainly focus on checking the facts of the news against known facts. Three types of fact checking techniques are available: Knowledge Linker (Ciampaglia et al. 2015), PRA (Lao et al. 2011) and PredPath (Shi et al. 2016). Prediction algorithms that use knowledge to check facts are Degree Product (Shi et al. 2016), Katz (1953), Adamic & Adar (Adamic et al. 2003) and the Jaccard coefficient (Liben et al. 2016); a small sketch of such link-prediction scores follows below. Some fact checking organizations provide online fact checking services, e.g. Snopes (www.snopes.com), PolitiFact (www.politifact.com) and Fiskkit (www.fiskkit.com). Hoaxy (https://hoaxy.iuni.iu.edu/) is another platform for fact checking; the collection, detection and analysis of online misinformation is part of Hoaxy. The criterion these services follow to decide whether news is fake or not is to refer it to domain experts, individuals or organizations on that particular topic, and they rely on non-partisan information and data sources (e.g. peer-reviewed journals, government agencies or statistics).
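As an illustration of these scores, the following sketch computes several of them for a candidate claim over a knowledge graph using networkx built-ins; the tiny graph is invented for illustration and is not taken from the cited papers.

```python
# Sketch: score a candidate claim (subject, object) on a knowledge graph with
# standard link-prediction measures; the toy graph here is purely illustrative.
import networkx as nx

G = nx.Graph()  # undirected view of a small knowledge graph
G.add_edges_from([
    ("barack_obama", "usa"), ("barack_obama", "democratic_party"),
    ("honolulu", "usa"), ("barack_obama", "honolulu"),
    ("nairobi", "kenya"),
])

claim = ("barack_obama", "kenya")  # claim to check: "Obama was born in Kenya"

# Degree product (preferential attachment), Jaccard and Adamic-Adar indices.
_, _, pa = next(iter(nx.preferential_attachment(G, [claim])))
_, _, jc = next(iter(nx.jaccard_coefficient(G, [claim])))
_, _, aa = next(iter(nx.adamic_adar_index(G, [claim])))

# Knowledge-Linker-style proxy: short paths between subject and object suggest
# support for the claim; no path at all suggests the claim is unsupported.
try:
    dist = nx.shortest_path_length(G, *claim)
except nx.NetworkXNoPath:
    dist = float("inf")

print(f"degree product={pa}, jaccard={jc:.2f}, adamic-adar={aa:.2f}, distance={dist}")
```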
Discussion

Our main research question is how one would distinguish between fake and non-fake news articles using data-driven methods and knowledge engineering. The facts show that the fake news phenomenon is an important issue that requires scholarly attention to determine how fake news diffuses. Different groups have introduced different models; some applied only the data-oriented side and some only the knowledge side. The important point is that the speed at which this type of information spreads on social media networks is a challenging problem that requires attention and an alternative solution. If news is detected as fake, the existing techniques block it immediately, by design and without replacement; yet when news is detected as fake we would at least want some expert opinion or verification before blocking that particular item. This gap has helped third-party fact checking organizations step in to solve the issue, but that is also a time-consuming process.
We need an application that checks whether news is fake or not in one place. The existing fake news systems are based on predictive models that simply classify news as fake or not fake. Some models use source reliability and network structure, and the big challenge in those cases is training the model, which is impossible without an available corpus. Due to the surge of the fake news problem, and to overcome the challenges discussed above, a volunteer-based organization, the Fake News Challenge (http://www.fakenewschallenge.org/), comprising 70 teams, organizes machine learning competitions specifically for the detection of fake news. In summary, there is a need for an alternative application that combines knowledge with data; automated fact checking is required that looks deeply into the content of the news together with expert opinion in the same place to detect fake news.

The rest of this paper is divided into four sections. Section 2 contains background, the impact on society, news content models, related work and similar application areas. Section 3 describes the methodology, the proposed combination approach and the publicly available dataset we used for initial classification. Our conclusions and future directions are presented in Section 4.

Literature Review

In this section we try to cover all the topics that relate to our subject and can be helpful for a better understanding of fake news detection. At the beginning we discuss the trust level of readers in online news media. We then discuss the impact of fake news on society and the different types of news models. Finally, we discuss related work and similar application areas where researchers applied the data-driven side or the knowledge side to overcome a specific problem in a particular domain.

Background

The Internet gave everyone the opportunity to enter the online news business, because many people had already rejected the traditional news sources that had gained a high level of public trust and credibility. According to a survey, general trust in the mass media has collapsed to its lowest point in the history of the business, especially along political lines: 51% of Democrats but only 14% of Republicans in the USA express a great deal of trust in the mass media as a news source (Lazer et al. 2018).

It is known that information that is repeated is more likely to be rated true than information that has not been heard before; familiarity with false news increases its perceived truthfulness, and repeated false stories can even create false memories. The authors who first observed this "illusory-truth effect" reported that subjects rated repeated statements as truer than new statements. They presented a case study in which participants who had read false news stories in five consecutive weeks believed the false stories to be more truthful and more plausible than did participants who had not been exposed (Hasher et al. 1977).

News can seem true if the information it expresses is familiar. Familiarity is an automatic consequence of exposure, so it influences perceived truth fully unintentionally. Even in cases where the source or agency that circulated the stories warned that the source may not be credible, people did not stop believing the story, due to familiarity (Begg et al. 1992). In another study, half of the statements shown in the experiments were true and half were false, but the results showed that participants preferred repeated statements: although they were false, familiarity led participants to rate them as more true than stories they heard for the first time (Bacon et al. 1979).

Source monitoring is the ability to check and identify the origin of the news we read. Some studies clearly indicate that participants use familiarity to attribute the source of their memories. Another study proposed that general knowledge and semantic memory do not record the conditions of learning; they help a person recall the information itself, but not when and where it was learned. Similarly, a person may have some knowledge about an event but not remember the event itself, because the knowledge comes from memory (Potts et al. 1989).

Overall Impact on different areas

News is a real-time account, a comprehensive story covering different areas such as criminology, health, sports, politics, business, etc. Local news agencies mostly focus on specific regional issues, while international news agencies cover both local and global news. Finding a particular story matching a reader's preferences is an important task; different methods have been proposed for following the reader's choice (Zhai et al. 2005), as well as for hot topic detection in a local area during a particular period, based on micro blogs that differ in wording but point towards the same topic, using Twitter and Wikipedia.

Business

In online news media, the services and the total number of users are important for gaining more business. Some big names that earn a lot due to their high numbers of users, and through which fake news circulates, are Facebook, Twitter, Google and other search engines, which serve both fake news producers and consumers. Fake news is growing dramatically day by day, and its impact on society is very bad.

Social Networks

Since the US Presidential elections, social media has faced pressure from the general public and civil society to decrease fake news on its platforms. Combating fake news is a very difficult task, especially when no proper checks and balances or sharing policies are available. Articles that go viral on social media can draw significant advertising revenue when users click and are redirected to the page.
But the question is how we can measure the importance of social media networks for fake news suppliers; one possibility is to measure it through the source of their web traffic. Every time a user visits a web page, that user has either navigated to it directly or been referred from some other site (Allcott et al. 2017).

One area that is really helpful for detecting fake articles on Facebook is the fact checking organizations. According to Facebook, they are taking all steps to overcome the issue on their platform and to make it as difficult as possible for people who want to share fake content to buy ads on the platform. Better identification of false news with the help of the community, third-party fact checking organizations and stance detection mechanisms is possible, because these can limit the spreading speed of fake content and make it uneconomical (Mosseri 2017).

Individual users have the same facility: they will get a message that some people do not agree with the article's content. Regular users, however, are not in a position to judge the validity of the links they see, so Facebook's flagging function might be unreliable (Wohlsen 2015).

The second area of focus is the flag available on a fake news article: users can simply click the upper right corner of a post. The more often a particular post is flagged as false by users, the less often it shows up in the news feed. According to Facebook's policy, they do not delete a flagged post but attach a disclaimer stating "Many people on Facebook have reported that this story contains false information" (Stanford History Education Group 2016).

Due to the sensitivity of the issue, Facebook sends a flagged post to a third party that is responsible for checking the facts of that post. If the fact checking organizations mark it as disputed, users automatically see a banner under the article when it appears in their news feed; the banner clearly explains that a third-party organization disputed the article, with a link to the details. Disputed stories are also pushed down in the news feed, and a message appears before sharing, so that users who are sure about the story can still share it (Guynn 2017). Relying on users is not a permanent or good solution, but the idea is to educate users, who can then share content knowingly. If every user took such care, fake news would not be as big a problem.

Checking the level of truth in articles is very difficult, as they differ only in some points and in a very professional manner. That is why the best way is for the management of Facebook to educate their users about the sharing policy: every user needs to understand that before sharing any information on Facebook, they must be sure about it (Dillet 2017). Facebook management also claims to have an algorithm that helps by rooting out fake articles; before sharing, the algorithm shows users the article's source, date, topic and number of interactions.

When we compare this with Twitter, fake news is shared by real account holders, by some small websites and by highly active "cyborg" users (Silva et al. 2016). These are very professional, and sometimes such professional groups have evolved to be industrialized by states and terrorist organizations. Such groups are called troll farms, and according to one study there is a potential algorithm to track them down on Twitter (Nygren et al. 2016).

Security Agencies

Misinformation or propaganda has always been used to affect people and create fear of an opponent. We can categorize it into three types. White propaganda is where we know the initiator, and the news circulated by that particular person or group is true. Black propaganda is where we do not know the source, and the news shared by that person or group is totally false. The grey type lies between white and black. During the Cold War the objective of these activities was to sway opinions by hiding and distorting facts from hidden senders. A big example of this type of propaganda happened from 2002 to 2008, when the United States military recruited approximately seventy-five retired officers to promote, in the media, claims about Iraq's possible ownership of weapons. The objective of such an activity is to weaken the public support of the opponent and strengthen one's own; the work was done through different channels, e.g. radio, newspapers and TV, that hid the connections (Nygren et al. 2016).

Comparing today with earlier variants of propaganda, it is now possible for anyone to reach a large audience within seconds, which was not possible in the past; this means we are more reliant on information, and it affects us more. Other actors are also involved in such campaigns and can easily distort the facts: diplomatic persons, military and economic state actors, and public relations departments. An independent body can control these types of activities more easily than a state that controls everything. A big example of this disinformation is the Ukraine crisis of 2014, where a state invaded another country's territory and misled the world about it, which badly affected the world's response. We know that spreading lies is not the only element; other linked activities are involved as well. In the next section we discuss the different types of news content models one by one, with examples.

News Content Models

In content modeling we identify our requirements, develop a taxonomy (classification system) that meets those requirements, and consider where metadata should be allowed or required.

Figure 2: News Content Models
News content models can be categorized into knowledge-based and style-based, but the growth of social media provides additional resources that allow researchers to supplement and enhance news content models with social context models, stance-based and propagation-based models. The main focus of news content modeling is on news content features, and especially factual sources, for the detection of fake versus real text (Wang 2017). In the next subsections we discuss the news content models and the existing applications that fall under each, with examples.

Knowledge-based:

The objective of the knowledge-based approach is to use external sources to fact-check news content, where the goal of fact checking is to assign a truth value to a claim in a particular context (Vlachos et al. 2014). The literature shows that fact checking for fake news detection has gained high attention, which is why many efforts have been made to develop feasible automated fact checking systems. Since fake news spreads false content on social media networks as well as news media, a straightforward means of detection is to check the truthfulness of those claims. We can categorize existing fact checking applications into three parts: expert oriented, crowdsourcing oriented and computational oriented.

Expert Oriented

Expert-oriented fact checking, as done by the well-known services Snopes (www.snopes.com) and PolitiFact (www.politifact.com), needs highly skilled domain experts who can investigate data and documents to give a verdict on claims. Expert-oriented fact checking is very exacting, but it is also a time-consuming process: as soon as a new claim is received, domain experts, journals or statistical analyses already available in the particular domain are consulted. Because this takes so much time, we need to develop a new classification approach that can help to detect fake news better and in a timely manner.

A new fact check mechanism can help readers by supporting critical evaluation of the news before judgment. The objective of that work is not to decide whether content is fake or not, but to provide a mechanism for critical evaluation during the reading process: as the reader starts reading the news, the fact check technique lets the reader see all related or linked stories at the same time for critical evaluation. A scoring measure formula displays the related stories above a scoring threshold; stories scoring below the threshold are not displayed on the corresponding fact check page (Guha 2017).

There are three generally agreed-upon characteristics of fake news - the text of an article, the user responses it receives, and the source - and these need to be incorporated in one place. On this basis, a hybrid model called CSI was proposed: its first module uses the text and the measured responses to capture the abstract temporal behavior of the users, and its second module scores the source by estimating a score for every user; the two modules are then combined (Ruchansky et al. 2017). The resulting model allows CSI to output its prediction, as shown in Figure 3. A rough sketch of such a two-branch architecture follows below.

Figure 3: Illustration of proposed CSI model
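The sketch below is our simplification of the described two-branch idea, not the authors' CSI implementation: one branch captures temporal response behavior with an LSTM, the other scores the source from user features, and both are combined for the prediction. All dimensions are arbitrary.

```python
# Sketch of a CSI-like two-branch model (our simplification, not the authors' code):
# branch 1 captures temporal response behavior with an LSTM, branch 2 scores the
# source from user features; both are combined for the final fake/real prediction.
import torch
import torch.nn as nn

class TwoBranchCSI(nn.Module):
    def __init__(self, resp_dim: int, user_dim: int, hidden: int = 32):
        super().__init__()
        self.capture = nn.LSTM(resp_dim, hidden, batch_first=True)          # temporal branch
        self.score = nn.Sequential(nn.Linear(user_dim, hidden), nn.Tanh())  # source branch
        self.classify = nn.Linear(2 * hidden, 1)

    def forward(self, responses: torch.Tensor, users: torch.Tensor) -> torch.Tensor:
        # responses: (batch, time, resp_dim) sequence of engagement features
        # users: (batch, user_dim) aggregate features of the engaging users
        _, (h, _) = self.capture(responses)
        combined = torch.cat([h[-1], self.score(users)], dim=1)
        return torch.sigmoid(self.classify(combined))  # P(article is fake)

model = TwoBranchCSI(resp_dim=8, user_dim=5)
p = model(torch.randn(4, 20, 8), torch.randn(4, 5))  # 4 articles, 20 responses each
print(p.shape)  # torch.Size([4, 1])
```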
Crowdsourcing Oriented

The crowdsourcing approach gives users the option to discuss and annotate the accuracy of specific news; in other words, it relies fully on the wisdom of the crowd to enable fact checking based on collective knowledge. Fiskkit (www.fiskkit.com) is a big example of this type of fact checking, as it provides the facility for users to discuss and annotate the accuracy of news articles. Another anti-fake-news application detects fake articles and additionally lets users report suspicious news content so that editors check it further. Taking motivation from Facebook's flag method, crowd signals from the public can be leveraged for detecting fake content (Potthast et al. 2016). An algorithm named Detective was developed along these lines: it estimates flagging accuracy at run time with Bayesian inference, selects a small subset of news every day to send to experts, and on the basis of the expert responses stops the fake news. A toy version of this flag-weighting idea is sketched below.
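The following is a toy sketch of weighting user flags by per-user reliability with Bayesian updating; it is our illustration of the general idea, not the published Detective algorithm, and all users, articles and numbers are invented.

```python
# Toy sketch of Detective-style reasoning (ours, not the published algorithm):
# each user's flag is weighted by a Beta-posterior estimate of how often that
# user's past flags matched expert verdicts; the articles with the highest
# weighted flag mass are sent to an expert for checking.
from collections import defaultdict

history = defaultdict(lambda: [0, 0])        # (correct_flags, total_flags) per user

def user_reliability(user: str, a: float = 1.0, b: float = 1.0) -> float:
    correct, total = history[user]
    return (correct + a) / (total + a + b)   # mean of Beta(a+correct, b+wrong)

def rank_for_experts(flags: dict[str, list[str]], k: int = 1) -> list[str]:
    """flags maps article id -> list of users who flagged it as fake."""
    mass = {art: sum(user_reliability(u) for u in users)
            for art, users in flags.items()}
    return sorted(mass, key=mass.get, reverse=True)[:k]

def expert_verdict(article: str, is_fake: bool, flags: dict[str, list[str]]) -> None:
    for u in flags[article]:                 # update every flagger's record
        history[u][0] += int(is_fake)
        history[u][1] += 1

flags = {"a1": ["alice", "bob"], "a2": ["troll1", "troll2"]}
expert_verdict("a2", False, flags)           # expert rules a2 was NOT fake
print(rank_for_experts(flags, k=1))          # trolls lose weight -> ['a1']
```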
Computational Oriented

Computational fact checking aims to provide users with an automatic system that can classify true and false content. Mostly, computational fact checking works on two points: identifying check-worthy claims and then discriminating the veracity of those claims. It works on key phrases and the viewpoints of users on the specific content (Houvardas et al. 2006). The open web and structured knowledge graphs are the big examples of sources for computational fact checking: open web sources are used as references against which news can be differentiated into true and false (Banko et al., 2007; Magdy et al., 2010).

One work whose main objective was the separation of fake content into three categories - serious fabrication, large-scale hoaxes and humorous fakes - provides a way to filter, vet and verify news, and discusses in detail the pros and cons of each category (Rubin et al., 2015).
One data-oriented study simply used an available dataset, applied deep learning methods, and proposed a new text classifier that can predict whether news is fake or not (Bajaj 2017). The dataset used for this project was drawn from two publicly accessible websites (www.kaggle.com and https://research.signalmedia.co/newsir16/signal-dataset.html).

Traditionally, rumor detection techniques were based on message-level detection and analyzed credibility on the basis of the data alone; in real-time detection based on keywords, however, the system gathers related micro blogs with the help of a data acquisition system, which solves this problem. The proposed model combines user-based, propagation-based and content-based models, checks credibility in real time, and sends back the response within thirty-five seconds (Zhou et al. 2015), as shown in Figure 4.

Figure 4: Framework of Real-time Rumor Detection

Style-based:

In the style-based approach, fake news publishers use a specific writing style, necessary to appeal to a wide audience, that is not found in true news articles; the purpose of this activity is to mislead, distort or influence a large population. One work categorizes news sources along two dimensions, writing quality and strong sentiment: real news sources have higher writing quality (taking into account misspelled words, punctuation and sentence length) than fake news articles, which are likely to be written by unprofessional writers, and real news sources use unbiased or neutral words, describing events with facts. The development of a classifier along these lines, and its comparison with other classification methods, is the main focus for fake content identification there (Fan 2017).

It is hard to pin down satire in the scholarly literature (Nidhi et al. 2011). Another study proposed a method that first translated the theories of humor, irony and satire into a predictive method for satire detection. The conceptual contributions of that work are to link satire, irony and humor, and then to target fake news framed as satire with filtering, due to its potential to mislead readers (Rubin et al. 2016).

Figure 5: Fake News: Satire Detection Process

Social Context Models:

Social media provides additional resources with which researchers can supplement and enhance news content models. Social models engage in the analysis process and capture the information in different forms from a variety of perspectives. Looking at existing approaches, we can categorize social context models into stance-based and propagation-based. One important point to highlight is that only a few existing social context model approaches have been utilized for fake news detection, so we draw on the literature on social context models used for rumor detection. The proper assessment of fake news stories shared on social media platforms, and the automatic identification of fake content with the help of information sources and social judgment on the basis of Facebook data, is the main point of one such work, which examined, in the context of the 2016 US Presidential elections, whether machine learning classifiers can be helpful for detecting fake news (Tresh et al. 1995).

Stance-based

Stance detection is a process that determines from responses to a news item whether the reader is in favor of, against, or neutral towards that particular news (Saif et al. 2017). There are two ways users represent stances, explicitly or implicitly: explicit stances are direct expressions such as a thumbs up or a thumbs down, while implicit stances must be extracted from social media posts. Overall, stance detection is the process of automatically determining from user posts whether the majority of users are in favor or against. Qazvinian et al. (2011) and Jin et al. (2016) proposed a model to check the viewpoints of users and then, on the basis of these viewpoints, to learn the credibility of posts. Tecchini et al. (2016) proposed a bipartite network of users and Facebook posts using the "like" stance; on the basis of the results, one can predict the likelihood that Facebook users believe a post.

Stance detection of headlines can be based on n-gram matching for the binary classification of "related" vs. "unrelated" headline-article pairs. This approach can be applied to the detection of fake news, and especially to clickbait detection. The authors used the dataset released by the Fake News Challenge (FNC-1) organization for their stance detection experiments (Bourgonje et al. 2017). The dataset is publicly available
and can be downloaded from the corresponding GitHub page along with a baseline implementation. Key points of the dataset can be seen in Figure 6 below; a small sketch of the n-gram relatedness idea follows after the figure.

Figure 6: Key Points of the FNC1 dataset
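The following is our toy version of the general n-gram-overlap idea, not the FNC-1 baseline itself; the headline, body text and the 0.1 threshold are invented for illustration.

```python
# Sketch of n-gram-overlap relatedness scoring for headline/body pairs (our toy
# version of the general idea, not the FNC-1 baseline).
def ngrams(text: str, n: int) -> set[tuple[str, ...]]:
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def relatedness(headline: str, body: str, n: int = 2) -> float:
    """Fraction of headline n-grams that also occur in the article body."""
    head = ngrams(headline, n)
    if not head:
        return 0.0
    return len(head & ngrams(body, n)) / len(head)

headline = "Pope endorses presidential candidate"
body = "No evidence supports the claim that the pope endorses any candidate ..."
label = "related" if relatedness(headline, body) >= 0.1 else "unrelated"
print(round(relatedness(headline, body), 2), label)  # 0.33 related
```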
Propagation-based

In the propagation-based approach, homogeneous and heterogeneous credibility networks are built for propagation: a homogeneous propagation network contains a single type of entity, such as posts or events, while a heterogeneous credibility network contains multiple entity types, such as posts, events and sub-events (Jin et al. 2016; Zhiwei et al. 2014; Gupta et al. 2012). In the propagation-based approach we check the interrelation of relevant events in social media posts to detect fake news and the credibility of that news. One study builds a three-layer network including sub-events, after which the credibility of news can be checked with the help of a graph optimization framework (Jin et al. 2014). Another propagation-based algorithm encodes users, checking the credibility of users and tweets together (Gupta et al. 2014). A toy sketch of credibility propagation over such a network is given below.
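As an illustration of the propagation idea (our simplification, not the cited three-layer graph optimization framework), the sketch below repeatedly averages credibility scores over a post-similarity graph while holding a few expert-verified posts fixed; the graph and anchor posts are invented.

```python
# Toy credibility propagation (ours): scores spread over edges between related
# posts; posts verified by experts keep their scores fixed (1=credible, 0=fake).
import networkx as nx

G = nx.Graph()
G.add_edges_from([("p1", "p2"), ("p2", "p3"), ("p3", "p4"), ("p4", "p5")])
fixed = {"p1": 1.0, "p5": 0.0}               # expert-verified anchor posts

scores = {n: fixed.get(n, 0.5) for n in G}   # unknown posts start neutral
for _ in range(50):                          # iterate to (near) convergence
    for n in G:
        if n not in fixed:
            nbrs = list(G.neighbors(n))
            scores[n] = sum(scores[m] for m in nbrs) / len(nbrs)

print({n: round(s, 2) for n, s in scores.items()})
# Posts closer to the credible anchor end up with higher scores.
```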
Similar Application Areas

In this section we discuss application areas similar to the problem of fake news detection. Some applications use the data side and some relate to the knowledge side; they perform well in a specific domain but require high effort during development, so a combination with knowledge engineering can help to reduce that effort. At the end we list some further data-driven applications (Table 1) and a few where a combination of data-driven and knowledge-driven methods exists (Table 2).

Truth Discovery/Hot Topic Detection

Truth discovery plays a distinguished role in the information age, as we need accurate information now more than ever. Truth discovery can be beneficial in many application areas, especially where critical decisions must be taken based on reliable information extracted from different sources, e.g. healthcare (Yaliang et al. 2016), crowdsourcing (Tschiatschek et al. 2018) and information extraction (Highet 1972). In some cases we have the information but are unable to explain it; in those cases knowledge engineering can take part, and we can predict better by learning from previous results. A toy sketch of the basic truth discovery iteration follows below.
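The sketch below is our toy rendition of the basic iteration behind many truth discovery methods; the sources and claims are invented, and the normalization is one of several reasonable choices.

```python
# Toy truth-discovery iteration (ours, in the spirit of the surveyed methods):
# source trust and claim belief reinforce each other.
claims_by_source = {
    "site_a": {"c1", "c2"},
    "site_b": {"c1", "c3"},
    "site_c": {"c3"},
}
claims = {c for cs in claims_by_source.values() for c in cs}
trust = {s: 0.5 for s in claims_by_source}       # start all sources neutral

for _ in range(20):
    # A claim's belief grows with the total trust of the sources asserting it.
    raw = {c: sum(t for s, t in trust.items() if c in claims_by_source[s])
           for c in claims}
    top = max(raw.values())
    belief = {c: v / top for c, v in raw.items()}
    # A source's trust is the average belief of the claims it asserts.
    trust = {s: sum(belief[c] for c in cs) / len(cs)
             for s, cs in claims_by_source.items()}

print({c: round(b, 2) for c, b in belief.items()})
# Claims asserted by more, and more trusted, sources end with higher belief.
```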
Rumor Detection

The objective of rumor detection is to classify a piece of information as rumor or non-rumor. Four steps are involved - detection, tracking, stance and veracity - that help to detect rumors, and the posts involved are considered important sensors for determining the authenticity of a rumor. Rumor detection can further be divided into four subtasks: stance classification, veracity classification, rumor tracking and rumor classification (Arkaitz et al. 2017). Still, a few points require more detail to understand the problem: we would like to learn from the results whether something actually is a rumor and, if so, to what extent. For these questions we believe a combination of the data and knowledge sides is required, to explore the areas that remain unexplained.

Clickbait Detection

Attracting visitors' attention and encouraging them to click on a particular link is the main objective of the clickbait business. Existing clickbait approaches utilize various features extracted from teaser messages, linked web pages and tweet meta information (Martin et al. 2016). In the same way, we can notify readers before they read any kind of news that it could be fake, based on specific indications, so that readers are more careful.

Email Spam Detection

Spam detection in email is one of the major problems bringing financial damage to companies and annoyance to individual users. Different groups are working with different approaches to detect spam in email, and various machine learning approaches are very helpful for spam filtering.

Figure 7: Spam Filtering

Spam causes different problems, as discussed broadly above; more precisely, spam causes misuse of traffic, computational power and storage space. One study also explains that many other techniques can be helpful for spam detection, such as email filtering, blacklists of unauthorized addresses, white lists and legal action (Siponen et al. 2006). A classic machine-learning spam filter is sketched below.
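As one concrete instance of machine-learning spam filtering, the sketch below trains a multinomial Naive Bayes model on bag-of-words counts; the four training mails are invented examples.

```python
# Sketch of a classic machine-learning spam filter: a multinomial Naive Bayes
# model over bag-of-words counts. The four training mails are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

mails = [
    "win a free prize now",                # spam
    "cheap pills free shipping",           # spam
    "meeting agenda for tomorrow",         # ham
    "please review the attached report",   # ham
]
labels = ["spam", "spam", "ham", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(mails, labels)

print(spam_filter.predict(["free prize shipping"]))     # -> ['spam']
print(spam_filter.predict(["report for the meeting"]))  # -> ['ham']
```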
In the two tables below we give an overview of the data-driven side and the knowledge side, and of the specific application domains where they were applied to resolve an issue.

Table 1: Data-driven applications

- Data-driven modelling framework for water distribution systems - Zheng et al., 2017
- Data-driven fuzzy modelling - Rosa et al., 2017
- Data-driven approach for counting apples and oranges - Chen et al., 2016
- Data-driven spoken language understanding system - Yulan et al., 2003
- Health and management - Delesie et al., 2001
- Data quality - Feelders et al., 2000

Table 2: Combinations of data-driven and knowledge-driven methods in different applications

- Combination of knowledge- and data-driven methods for de-identification of clinical narratives - Dehghan et al., 2015
- Extending knowledge-driven activity models through data-driven learning techniques - Gorka et al., 2015
- From model, signal to knowledge: a data-driven perspective of fault detection and diagnosis - Dai et al., 2013
- A hybrid knowledge-based and data-driven approach to identifying semantically similar concepts - Pivovarov et al., 2012
- Combining knowledge- and data-driven insights for identifying risk factors using electronic health records - Sun et al., 2012
- A dual process model of defense against conscious and unconscious death-related thoughts: an extension of terror management theory - Pyszozynski et al., 1999

Discussion

We have discussed the different approaches defined in the last few years to overcome the problem of fake news detection in social networks. Most of the approaches are based on supervised or unsupervised methods, and they do not provide good results due to the non-availability of a gold-standard dataset that could help to train and evaluate the classifiers (Subhabrata et al. 2015). It is a fact that the motivations and psychological states of mind of ordinary people can differ from those of professionals in the real world. Different groups are now working to combat this hot issue, and for that purpose they are considering utilizing actual datasets rather than opinions and blogs. To tackle the fake news detection problem, we need to incorporate both behavioral and social entities and to combine knowledge and data. In this section we have tried to discuss the possible types of fake news and their social impact; on the basis of the literature evaluation we can say that it is also possible to detect fake news with different known facts such as time, location, quality and the stance of others. With these similarity measures we can assess the quality of news. In the next section we discuss the proposed combination and a statistical analysis of a publicly available dataset, in order to understand the issue more deeply.

Method

We propose learning from data combined with engineered knowledge to overcome the fake news issue on social media. To achieve this goal, a new combination algorithm (Figure 8) shall be developed which classifies the text as soon as the news is published online.

Figure 8: Block Diagram of the framework

In developing such a new classification approach, as a starting point for the investigation of fake news we first applied a publicly available dataset for our learning. The first step in fake news detection is classifying the text immediately once the news is published online. Classification of text is one of the important research issues in the field of text mining: the dramatic increase in the content available online makes managing this online textual data a problem, so it is important to classify the news into specific classes (fake, non-fake, unclear); a small sketch of such a three-class decision is given below.
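One simple way to obtain the third class, sketched below under our own assumptions, is to train a probabilistic two-class classifier and label an article "unclear" whenever its confidence stays below a threshold; the training rows and the threshold are invented for illustration.

```python
# Sketch (ours) of the fake / non-fake / unclear decision: a probabilistic text
# classifier labels an article "unclear" when its confidence is below a threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["shocking secret cure doctors hide",    # invented training rows
               "parliament passes the annual budget"]
train_labels = ["fake", "non-fake"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

def classify(text: str, threshold: float = 0.6) -> str:
    probs = clf.predict_proba([text])[0]
    if probs.max() < threshold:
        return "unclear"                   # not confident enough either way
    return clf.classes_[probs.argmax()]

# Outputs depend on the (tiny) training set; real use needs a proper corpus.
print(classify("doctors hide this shocking secret"))
print(classify("completely unrelated sentence"))
```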
Manually classifying the millions of news items published online is a time-consuming and expensive task, so before moving to automatic text classification we need to understand the different text classification techniques (Nidhi et al. 2011).

Selection/Collection of news articles

For training and understanding the classifier, we used a publicly available dataset (https://www.kaggle.com/mrisdal/fake-news/version/1) based on a collection of approximately seventeen thousand news articles extracted from online news organizations, with: the location of the article (country); publication details (organization, author, date, unique id); text (title, full article, online link); and classification details.

Data extraction and analysis

The dataset was already sorted qualitatively into different categories, such as fake, not fake, bias, conspiracy and hate. We further classified the data by different result indicators (replies, participants, likes, comments and total number of shares). Below we show the outcomes for this dataset, which help us to understand the process.

Results extracted from the dataset and future goal

The details of the dataset, with the classified attributes mentioned above in the collection step, are as follows; in Figure 9 we highlight the results we obtained and how claims can be specified that are helpful for the proposed combination of techniques. Of 17,946 news articles, 12,460 articles were in the biased category, 572 were fake articles, 870 articles were in the conspiracy category and 2,059 were non-fake articles. A small sketch of this tallying step follows after the figure.

Figure 9: Comparison results
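The tallying step can be reproduced roughly as sketched below on the Kaggle dump named above; we assume the CSV is the `fake.csv` file distributed there and that the qualitative category sits in a column called `type` - the column names are assumptions to adjust against the actual file.

```python
# Sketch of the tallying step on the Kaggle "fake news" dump
# (https://www.kaggle.com/mrisdal/fake-news/version/1); we assume a category
# column named "type" - adjust the names to the actual file.
import pandas as pd

df = pd.read_csv("fake.csv")                 # file name as distributed on Kaggle
counts = df["type"].value_counts()           # articles per qualitative category
print(counts.head(10))
print("total articles:", len(df))

# Engagement-style indicators, if present, can be aggregated per category, e.g.:
for col in ("likes", "comments", "shares"):  # column names are assumptions
    if col in df.columns:
        print(df.groupby("type")[col].mean())
```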
Our proposed combination diagram contains two parts, data and knowledge, whose further subdivision can be seen in the diagram: the data side contains text classification and stance detection, while the knowledge side contains fact checking, which helps us to refine the results. We categorize our task into three parts, and at the end we combine the results to check whether the news status is fake or not fake.

In this section we describe the step-by-step process by which we combine learning from data and engineered knowledge in order to combat fake news in social networks. Once news is published online, the classifier classifies the text into the classes fake, non-fake and unclear. After text classification we check the stance of that particular news item, which categorizes the news into four categories: agree, disagree, discuss and unrelated. In the next step we apply fact checking, which refines our results, as fact checking uses engineered knowledge to analyze the content of the text and compare it to known facts (see Figure 10).

Figure 10: Proposed Combination of Data Driven and Knowledge

When news is published online, our proposed classifier checks the similarity between words, the text and the overall similarity. From the literature we have come to know that, for news datasets, an SVM can be a good starting point due to the way it deals with the data; since we need to evaluate some mathematical expressions, we may need to use other library APIs, in which case it can perform well. A neural network produces good results, but only if we have a large sample size and large storage space, and it is also intolerant of noise. A term graph is preferred especially when we have adjacent words and our objective is to maintain the correlation between classes. A Bayesian classifier can also perform well, but only where we have a smaller dataset.

In the stance detection step we check whether the viewpoint of the reader of the news is in favor, against or neutral. According to the literature there are two ways users represent a stance, explicitly or implicitly: explicit stances are direct expressions such as a thumbs up or thumbs down, while implicit stances are extracted from social media posts.

Finally, we apply fact checking, which works on two points: identifying check-worthy claims and discriminating the veracity of those claims. We apply key phrases and the viewpoints of users to that particular news item; examples of fact checking sources are the open web and structured knowledge graphs. A sketch of the combined pipeline is given below.
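The skeleton below is our sketch of how the three modules could be wired together; the component functions are stubs standing in for the classifier, stance detector and fact checker described above, and the combination rules are illustrative assumptions rather than a finalized design.

```python
# Sketch (ours) of the proposed combination: data-driven text classification and
# stance detection, then knowledge-based fact checking, merged into one verdict.
# The three component functions are stubs standing in for the modules above.
def classify_text(article: str) -> str:
    return "unclear"                      # stub: "fake" / "non-fake" / "unclear"

def detect_stance(article: str, reactions: list[str]) -> str:
    return "disagree"                     # stub: agree / disagree / discuss / unrelated

def fact_check(article: str, knowledge_base) -> float:
    return 0.2                            # stub: factual support score in [0, 1]

def combined_verdict(article: str, reactions: list[str], kb) -> str:
    label = classify_text(article)
    stance = detect_stance(article, reactions)
    support = fact_check(article, kb)
    # Engineered knowledge refines the data-driven signals: low factual support
    # plus disagreeing reactions pushes an unclear article towards "fake".
    if label == "unclear":
        label = "fake" if support < 0.5 and stance == "disagree" else "non-fake"
    if label == "fake" and support > 0.8:
        label = "non-fake"                # strong factual support overrides
    return label

print(combined_verdict("some breaking story", ["that is false!"], kb=None))  # fake
```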
Discussion

In the end we will automate our proposed combination so that it classifies the text automatically; after stance detection and fact checking we will be in a position to obtain the result that the news is fake or not fake.

In this survey, we have covered previous efforts towards the development of a fake news system that can detect news items and resolve the veracity of rumors individually. We discussed in the introduction that fake content became a big issue after the 2016 Presidential elections, and we also know that the veracity of a rumor is unverifiable in the early stages: it is subsequently resolved as true or false within a relatively short period of time, or it can remain unverified for a long time. We also discussed different detection systems, which have distinct characteristics but also commonalities with rumor detection, so it is difficult to detect fake news with data-driven methods alone. The approaches discussed in this article are each designed to tackle the fake news issue in some way, but we argue that their integration is what can really help detection (Figure 10).

Since fake news producers seem to keep improving their sharing strategies to avoid text classification and detection techniques, fake news detection organizations are required to update their strategies as well.
Conclusion

Since the US presidential elections, social media has often become a vehicle for spreading misinformation and hoaxes: no particular instruments or cognitive abilities are required to assess the credibility of another person; anyone can simply come and share their opinion on social media. This may have no serious consequences when it only spreads rumors of little importance, but it becomes a serious problem when consumers purchase products on the basis of such rumors, or when serious security issues arise, especially in the context of politics, where public opinion is influenced and individuals run small- or large-scale campaigns only to ruin someone's credibility (e.g., the Donald Trump and Hillary Clinton election).

In this paper we have tried to cover the work on knowledge-based and style-based models and to explain the subcategories that occur in these two domains, e.g. social context based, propagation based and stance based. We have taken into consideration the effect of fake news on social platforms and covered contexts where false news generates serious issues for the individuals involved. We have presented a state-of-the-art block diagram that combines knowledge (fact checking) and data (text classification, stance detection). As already discussed, the important open issue is the non-availability of a gold-standard dataset and predefined benchmarks, as well as of large collections of fake articles. On the basis of the points we have highlighted, one can say that even in the big data era the problem has not yet received the attention it deserves; however, a few approaches discussed in the expert-oriented section have been proposed that automatically assess fact checking and the credibility of news.

As discussed in the introduction, we can also analyze fake news via different similarity measures: whether the same news was published by other media agencies; the location of the news (an item generated somewhere other than the location it deals with has a higher probability of being fake, e.g. Trump writes about China or Arab states, or news about Hillary Clinton originates in Russia); its quality (fake news more probably does not mention its sources and simply claims something, while real news mentions the source); and its timing (fake items are often repeated frequently in the beginning, because they are interesting, and become recognized as fake over time, which reduces the repetition, or they are deleted from some websites). At this stage we do not have a definitive solution, but after a detailed literature review we can say that producing more reports with more facts can help us make such decisions and find technical solutions for fake news detection.

The combination of machine learning and knowledge engineering can be useful for fake news detection, as fake news looks set to be one of the most challenging areas of research in the coming years.

References

Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: understanding and coping with the "post-truth" era. Journal of Applied Research in Memory and Cognition, 6(4), 353-369.
Rainie, H., Anderson, J. Q., & Albright, J. (2017). The future of free speech, trolls, anonymity and fake news online. Washington, DC: Pew Research Center.
Althaus, S. L., & Tewksbury, D. (2000). Patterns of Internet and traditional news media use in a networked community. Political Communication, 17(1), 21-45. DOI: 10.1080/105846000198495.
Leonhardt, D., & Thompson, S. A. (2017). Trump's lies. The New York Times.
Himma, M. (2017). Alternative facts and fake news entering journalistic content production cycle. Cosmopolitan Civil Societies: An Interdisciplinary Journal, 9(2), 25-41. DOI: 10.5130/ccs.v9i2.5469.
Opinion: Fake news and the Internet shell game. The New York Times, Nov. 28, 2016. https://www.nytimes.com/2016/11/28/opinion/fake-news-and-the-internet-shell-game.html
Soll, J. (2016). The long and brutal history of fake news. Politico Magazine. https://www.politico.com/magazine/story/2016/12/fake-news-history-long-violent-214535
Ferreira, W., & Vlachos, A. (2016). Emergent: a novel data-set for stance classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. ACL.
Mohammad, S. M., Sobhani, P., & Kiritchenko, S. (2017). Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 17(3), 26.
Lukasik, M., Cohn, T., & Bontcheva, K. (2015). Classifying tweet level judgements of rumours in social media. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP'15), 2590-2595.
Lukasik, M., Srijith, P. K., Vu, D., Bontcheva, K., Zubiaga, A., & Cohn, T. (2016). Hawkes processes for continuous time sequence classification: an application to rumour stance classification in Twitter. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 393-398. ACL.
Ciampaglia, G. L., Shiralkar, P., Rocha, L. M., Bollen, J., Menczer, F., & Flammini, A. (2015). Computational fact checking from knowledge networks. PLOS ONE, 10(6), e0128193.
Lao, N., & Cohen, W. W. (2010). Relational retrieval using a combination of path-constrained random walks. Machine Learning, 81(1), 53-67.
Shi, B., & Weninger, T. (2016). Discriminative predicate path mining for fact checking in knowledge graphs. Knowledge-Based Systems, 104, 123-133.
Katz, L. (1953). A new status index derived from sociometric analysis. Psychometrika, 18(1), 39-43.
Adamic, L. A., & Adar, E. (2003). Friends and neighbors on the web. Social Networks, 25(3), 211-230.
Liben-Nowell, D., & Kleinberg, J. (2007). The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology, 58(7), 1019-1031.
Lazer, D., Baum, M. A., & Benkler, Y. (2018). The science of fake news. https://scholar.harvard.edu/files/mbaum/files/science_of_fake_news.pdf
Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16, 107-112. DOI: 10.1016/S0022-5371(77)80012-1.
Begg, I. M., Anas, A., & Farinacci, S. (1992). Dissociation of processes in belief: source recollection, statement familiarity, and the illusion of truth. Journal of Experimental Psychology: General, 121, 446-458. DOI: 10.1037/0096-3445.121.4.446.
Bacon, F. T. (1979). Credibility of repeated statements: memory for trivia. Journal of Experimental Psychology: Human Learning and Memory, 5, 241-252. DOI: 10.1037/0278-7393.5.3.241.
Potts, G. R., St. John, M. F., & Kirson, D. (1989). Incorporating new information into existing world knowledge. Cognitive Psychology, 21, 303-333. DOI: 10.1016/0010-0285(89)90011-X.
Zhai et al. (2005). Tracking news stories across different sources. In MM'05, November 6-11, 2005, Singapore.
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. National Bureau of Economic Research. http://www.nber.org/papers/w23089
Mosseri, A. (2017). Working to stop misinformation and false news. Newsroom.fb.com.
Wohlsen, M. (2015). Stop the lies: Facebook will soon let you flag hoax news stories. Wired, May 2015. https://www.wired.com/2015/01/facebook-wants-stop-lies-letting-users-flag-news-hoaxes/
Stanford History Education Group (2016). Evaluating information: the cornerstone of civic online reasoning. https://stacks.stanford.edu/file/druid:fv751yt5934/SHEG%20Evaluating%20Information%20Online.pdf
Guynn, J. (2017). Facebook begins flagging 'disputed' (fake) news. USA Today, Mar. 2017. https://www.usatoday.com/story/tech/news/2017/03/06/facebook-begins-flagging-disputed-fake-news/98804948
Dillet, R. (2017). Facebook runs full page newspaper ads against fake news in France ahead of the election. TechCrunch, Apr. 2017. https://techcrunch.com/2017/04/14/facebook-runs-full-page-newspaper-ads-against-fake-news-in-france-ahead-of-the-election
Lazer, D., Baum, M., Grinberg, N., Friedland, L., Joseph, K., Hobbs, W., & Mattsson, C. (2017). Combating fake news: an agenda for research and action. May 2017. https://shorensteincenter.org/combating-fake-news-agenda-for-research/
Da Silva, F., & Englind, M. (2016). Troll detection: a comparative study in detecting troll farms on Twitter using cluster analysis. DD151X Examensarbete i Datateknik, grundnivå. http://www.diva-portal.org/smash/get/diva2:927209/FULLTEXT02
Nygren, G., & Hök, G. (2016). Ukraine and the information war - journalism between ideal and self-esteem. The Federal Agency for Protection and Preparedness.
Wang, W. Y. (2017). "Liar, liar pants on fire": a new benchmark dataset for fake news detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 422-426.
People and Responsibilities. Reich Conference 2017. https://www.youtube.com/watch?v=h7fTVDYzNM
Vlachos, A., & Riedel, S. (2014). Fact checking: task definition and dataset construction. ACL'14.
Houvardas, J., & Stamatatos, E. (2006). N-gram feature selection for authorship identification. In Artificial Intelligence: Methodology, Systems, and Applications, 77-86.
Banko, M., Cafarella, M. J., Soderland, S., Broadhead, M., & Etzioni, O. (2007). Open information extraction from the web. In IJCAI'07.
Magdy, A., & Wanas, N. (2010). Web-based statistical fact checking of textual documents. In Proceedings of the 2nd International Workshop on Search and Mining User-Generated Contents, 103-110. ACM.
Qazvinian, V., Rosengren, E., Radev, D. R., & Mei, Q. (2011). Rumor has it: identifying misinformation in microblogs. In EMNLP'11.
Jin, Z., Cao, J., Zhang, Y., & Luo, J. (2016). News verification by exploiting conflicting social viewpoints in microblogs. In AAAI 2016.
Jin, Z., Cao, J., Jiang, Y.-G., & Zhang, Y. (2014). News credibility evaluation on microblog with a hierarchical propagation model. In ICDM 2014.
Gupta, M., Zhao, P., & Han, J. (2012). Evaluating event credibility on Twitter. In SDM'12.
Potthast, M., Köpsel, S., Stein, B., & Hagen, M. (2016). Clickbait detection. In European Conference on Information Retrieval, 810-817. Springer.
Fan, C. (2017). Classifying fake news. conniefan.com.
Ruchansky, N., Seo, S., & Liu, Y. (2017). CSI: a hybrid deep model for fake news detection. In Proceedings of the 2017 ACM Conference on Information and Knowledge Management, 797-806. ACM.
Nidhi, & Gupta, V. (2011). Recent trends in text classification techniques. International Journal of Computer Applications, 35(6).
Rubin, V., Conroy, N., Chen, Y., & Cornwell, S. (2016). Fake news or truth? Using satirical cues to detect potentially misleading news. In Proceedings of the Second Workshop on Computational Approaches to Deception Detection, 7-17.
Bourgonje, P., Schneider, J. M., & Rehm, G. (2017). From clickbait to fake news detection: an approach based on detecting the stance of headlines to articles. In Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism, 84-89.
Bajaj, S. (2017). "The Pope has a new baby!" Fake news detection using deep learning.
Tresh, M., & Luniewski, A. (1995). In Proceedings of the Fourth International Conference on Information and Knowledge Management, 226-233. ACM.
Rubin, V. L., Chen, Y., & Conroy, N. J. (2015). Deception detection for news: three types of fakes. In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community, p. 83. American Society for Information Science.
Zhou, X., Cao, J., Jin, Z., Xie, F., Su, Y., Chu, D., ... & Zhang, J. (2015). Real-time news certification system on Sina Weibo. In Proceedings of the 24th International Conference on World Wide Web, 983-988. ACM.
Guha, S. (2017). Related fact checks: a tool for combating fake news. arXiv preprint arXiv:1711.00715.
Li, Y., Gao, J., Meng, C., Li, Q., Su, L., Zhao, B., Fan, W., & Han, J. (2016). A survey on truth discovery. ACM SIGKDD Explorations Newsletter, 17(2), 1-16.
Tschiatschek, S., Singla, A., Gomez Rodriguez, M., Merchant, A., & Krause, A. (2018). Fake news detection in social networks via crowd signals. In Companion Proceedings of The Web Conference 2018, 517-524. International World Wide Web Conferences Steering Committee.
Highet, G. (1972). The Anatomy of Satire. Princeton, NJ: Princeton University Press.
Zubiaga, A., Aker, A., Bontcheva, K., Liakata, M., & Procter, R. (2017). Detection and resolution of rumours in social media: a survey. arXiv preprint arXiv:1704.00656.
Siponen, M., & Stucke, C. (2006). Effective anti-spam strategies in companies: an international study. In Proceedings of HICSS '06.
Mukherjee, S., & Weikum, G. (2015). Leveraging joint interactions for credibility analysis in news communities. In CIKM'15.