An interactive prototype to spot Fake News in young people


                        Erick López Ornelas1, Rocío Abascal Mena1
1 Universidad Autónoma Metropolitana - Cuajimalpa
                        {elopez,mabascal}@cua.uam.mx




       Abstract. Do you know how to identify fake news? This project aims to help
       young people under twenty, with access to the internet and social networks,
       identify the key elements of a news item so that they can distinguish real news
       from fake. The creation of an interactive prototype is guided by the importance
       of the user experience (UX) in its development. This paper explains the main
       steps of the methodology and proposes recommendations to help young people
       spot fake news.

       Keywords: Fake News, User-Centered Design, User Research Methods, User
       Experience




1 Introduction

The term “fake news” (FN) was officially ushered into the lexicon when the Oxford
Dictionary added the term in 2017 [1]. While the term is frequently used and
definitions vary, the problem of deceptive data is serious and exposes a profound and
underlying flaw in information and network security models. This flaw is trust in
entities without verification of the content that they exchange.
“Trust but verify” [2] is an old proverb that, until recently, resulted in trust at the
expense of verification. Fake news has become a social phenomenon because of the
confusion it causes in society. Every day, individuals on social networks are
surrounded by large amounts of information of all kinds. However, more information
is not synonymous with reliable data; on the contrary, more and more sites and
publications are dedicated to spreading distorted or, worse, completely fake
information [3]. False information is very harmful to society, as it can escalate to
unexpected levels and manipulate important decision-making processes such as
political elections [4]. Malicious processes of this nature directly affect our lives,
since they involve our environment and ourselves. The purpose of this study is to
contribute to the correct identification of the different characteristics of a news item,
in order to improve users' judgment when browsing social networks (Facebook in
particular) and web pages. We focus on young people because they are the ones who
have grown up with digital technology and because they receive and spread large
amounts of data and news on the internet.




Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons
License Attribution 4.0 International (CC BY 4.0).






2 Background

Detecting fake news in social media is an extremely important, yet technically very
challenging, problem. In one study, human judges, by a rough measure of comparison,
achieved only 50-63% success rates in identifying fake news [5]. Most fake news
detection algorithms rely on linguistic cues [6]. Several successful studies on fake
news detection have demonstrated the effectiveness of linguistic cue identification, as
the language of truthful news is known to differ from that of fake news [7]. For
example, deceivers are likely to use more sentiment words, more sense-based words
(e.g., seeing, touching) and more other-oriented pronouns, but fewer self-oriented
pronouns.
   Compared to real news, fake news shows lower cognitive complexity and uses
more negative emotion words. However, the linguistic indicators of fake news across
different topics and media platforms are not well understood. Rubin points out that
there are many types of fake news, each with different potential textual indicators [5].
This indicates that using linguistic features is not only laborious but also requires
topic- and media-dependent domain knowledge, thus limiting the scalability of these
solutions.
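   To make the idea of linguistic cues concrete, the sketch below counts a few of the
cue categories mentioned above in a news text; the word lists and the feature set are
invented for illustration and are not the lexicons used in the cited studies.

```python
# Minimal sketch of linguistic-cue feature extraction for a news text.
# The word lists are illustrative placeholders, not the lexicons used in
# the cited studies.
import re

SENTIMENT_WORDS = {"amazing", "terrible", "shocking", "outrageous", "incredible"}
SENSE_WORDS = {"see", "seeing", "saw", "touch", "touching", "hear", "hearing"}
SELF_PRONOUNS = {"i", "me", "my", "mine", "we", "us", "our"}
OTHER_PRONOUNS = {"you", "your", "he", "she", "they", "them", "their"}

def linguistic_features(text):
    """Return the relative frequency of each cue category in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = max(len(tokens), 1)
    ratio = lambda vocab: sum(t in vocab for t in tokens) / total
    return {
        "sentiment": ratio(SENTIMENT_WORDS),
        "sense_based": ratio(SENSE_WORDS),
        "self_pronouns": ratio(SELF_PRONOUNS),
        "other_pronouns": ratio(OTHER_PRONOUNS),
    }

print(linguistic_features("You won't believe this shocking story they are hiding!"))
```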
   In addition to lexical features, speaker profile information can be useful [8].
Speaker profiles, including party affiliation, job title and topical information, can also
be used to indicate the credibility of a piece of news. To exploit profile information,
[9] proposes a hybrid CNN model to detect fake news that uses speaker profiles as
part of the input data. The Long Short-Term Memory network (LSTM), as a neural
network model, has been shown to work better for long sentences [10]. Attention
models have also been proposed to weigh the importance of different words in their
context. Current attention models are based either on local semantic attention [11] or
on user attention [12].
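   As a toy illustration of how an attention model weighs the importance of different
words, the following sketch computes softmax weights over hand-supplied per-word
relevance scores; it is a simplification and not the architecture of [11] or [12], where
such scores are learned from the text itself.

```python
# Toy illustration of attention weighting: a softmax over per-word relevance
# scores. Real attention models learn these scores from hidden states; here
# they are supplied by hand purely for illustration.
import math

def attention_weights(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

words = ["breaking", "miracle", "cure", "found"]
scores = [0.2, 1.5, 1.2, 0.1]   # hypothetical relevance scores
for word, weight in zip(words, attention_weights(scores)):
    print(f"{word}: {weight:.2f}")
```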
   The difficulty comes partly from the fact that even human beings may have trouble
distinguishing between real and fake news. Even worse, young people are not aware
of the information they receive and replicate. This is the main reason for using
User-Centered Design to identify the real experience of our target users.
   For the development of this study, the user experience (UX) is considered a core
element, so that the product really covers the specific needs of the users and reflects
their interaction with the environment.


3 User Research Methods

The UX process consists of several stages, whose main objective is to identify the
problem to be addressed.
   The first stage was a brainstorming session to find a theme representing a real
problem in our environment. The result was the theme of fake news.




   The second stage consisted of using the "Persona" methodology, which is a very
useful method to define the different user profiles for the study [13]. After analyzing
the different profiles, we decided to focus on young people (16 to 20 years old), since
they are the largest consumers of social networks, where fake news usually
circulates (Fig. 1).




                 Fig. 1. Selected user based on “Persona” methodology.

It was important to collect information directly from the users, so some interviews
were conducted [5]. In the first phase, the interviews were applied to one group
(16-20 years old) with different educational levels.
   In these interviews, individuals were asked about their Internet consumption, how
they search for information and, especially, how they react to news. Through these
interviews, relevant findings emerged, such as:

    -    Young people trust traditional media, especially television and radio, for
         urgent news (natural disasters, for example).

   After that, some surveys were conducted in order to deepen the information
obtained. With these surveys we explored the capacity of young people to identify
fake news and the degree of trust they place in the information they find day to day
on social networks and websites. Similarly, we verified whether users relied on
traditional media and what verification strategies they used to identify fake news.
   From the data obtained in the surveys, a series of findings similar to those of the
interviews was obtained, but with a greater degree of precision, such as:

    -    Young people use the Internet not only for entertainment; it is also one of
         their main sources for school research and a way to obtain information of
         all kinds (tutorials, films, images, etc.).




    -    They do not completely trust what they find on social networks; they have
         many doubts and prefer to check their findings in other types of media
         (mainly traditional).
    -    They try to consult multiple sources. They do not usually share information
         immediately; many of them prefer to verify its veracity before doing so.
    -    They tend to give importance only to things they like or that catch their
         attention.

   Finally, the needs of the users were analyzed and our target user was identified.
These young people of school age were selected because of their high Internet
consumption for their various tasks and their familiarity with technology. In addition,
they have had greater contact with social networks.

   Once we knew our users, we had to check whether these findings held true, which
is why the following study was designed.


4 The Study

Once the users were identified and their consumption habits known, an interactive
prototype was used to verify the users' true reaction when they interact with fake
news (see Fig. 2).
   The prototype developed is an interactive application in which the user tries to
distinguish fake news from real news. It can be seen as an interactive game because
the user has limited time to decide whether a news item is fake or not [4].
   The interactive prototype was originally planned to have two modes: an individual
mode and a competition of up to four people.
   In the individual mode, the participant starts almost immediately and interacts with
the prototype. First, the instructions are given. Then the interaction starts with five
questions classified as intermediate (news items that are not so easy to identify as
fake or real). Once those five questions have been answered, the prototype estimates
the level of expertise of the participant and continues with five more questions (of
either lower or higher difficulty) in order to complete a round of ten questions.
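   A minimal sketch of this adaptive flow, assuming a pass threshold of three correct
answers out of five (the paper does not specify the exact rule), could look like this:

```python
# Sketch of the adaptive question flow: five intermediate questions first,
# then five easier or harder ones depending on the score so far.
# The threshold of 3 correct out of 5 is an assumption for illustration.
import random

def pick_questions(pool, difficulty, n=5):
    """Pick n questions of the given difficulty from the pool."""
    return random.sample([q for q in pool if q["difficulty"] == difficulty], n)

def run_round(pool, answer_fn):
    """answer_fn(question) returns 1 if the user classifies the item correctly."""
    score = sum(answer_fn(q) for q in pick_questions(pool, "intermediate"))
    # Estimate expertise from the first five answers and adapt the difficulty.
    next_difficulty = "hard" if score >= 3 else "easy"
    score += sum(answer_fn(q) for q in pick_questions(pool, next_difficulty))
    return score, next_difficulty

# Example usage with a dummy question pool and a user who answers at random.
pool = [{"id": i, "difficulty": d}
        for d in ("easy", "intermediate", "hard") for i in range(5)]
print(run_round(pool, lambda q: random.choice([0, 1])))
```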




                          Fig. 2. Example of the news displayed.


Finally, based on the answers, the prototype shows the user's actual level of skill in
identifying fake news. Based on this level of experience, the prototype shows some
simple recommendations for verifying fake news (see Fig. 3).
   In the multiplayer mode, each participant would scan a QR code to open an
extension of the interface and run the prototype on their smartphone. In this case, the
main prototype screen would only serve as a dashboard showing, in real time, the
questions, the response options, the progress of the participants, the highest and
lowest scores and the difficulties each user has in identifying fake news.
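   A hypothetical shape for that shared dashboard state, with class and field names
invented for illustration rather than taken from the actual implementation, is
sketched below.

```python
# Hypothetical shape of the real-time dashboard state for the multiplayer
# mode; class and field names are assumptions made for illustration.
from dataclasses import dataclass, field

@dataclass
class PlayerState:
    name: str
    answered: int = 0
    correct: int = 0
    missed: list = field(default_factory=list)   # fake items the player failed to spot

@dataclass
class DashboardState:
    current_question: str = ""
    options: list = field(default_factory=list)
    players: dict = field(default_factory=dict)  # player name -> PlayerState

    def leaderboard(self):
        """Return players ordered from highest to lowest score for the shared screen."""
        return sorted(self.players.values(), key=lambda p: p.correct, reverse=True)
```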
   Both prototype modes reveal, at the end, the user's level of skill in identifying fake
news; in all cases, feedback is shown with recommended strategies for identifying
fake news. Once this process is finished, an infographic containing the
recommendations proposed by the prototype is generated, which can be printed or
saved (see Fig. 4).

The recommendations [14] shown to the user can be summarized as follows:
    - Consider the source. The user needs to investigate the site, its mission and its
        contact info.
    - Read beyond. Headlines can be outrageous in an effort to get likes; the user
        needs to read the whole story.
    - Check the author. The user needs to verify the author and their credibility.
    - Supporting sources. The user needs to determine whether the information
        given actually supports the story.
    - Check the date. Reposting old news stories does not mean they are relevant to
        current events.
    - Is it a joke? The user needs to research the site and author.
    - Ask the experts. Consult a professor or a librarian.
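   A minimal sketch of how the final feedback and infographic content could be
assembled from the score out of ten is shown below; the thresholds, the level names
and the per-level selection of recommendations are assumptions, not the prototype's
actual rules.

```python
# Sketch of how the final feedback and infographic content could be built
# from the score out of ten. Thresholds, level names and the per-level
# selection of recommendations are assumptions, not the prototype's rules.
RECOMMENDATIONS = [
    "Consider the source: investigate the site, its mission and its contact info.",
    "Read beyond the headline: read the whole story.",
    "Check the author and their credibility.",
    "Check that the supporting sources actually back the story.",
    "Check the date: old stories may not be relevant to current events.",
    "Ask yourself whether it could be a joke; research the site and author.",
    "Ask the experts: consult a professor or a librarian.",
]

def feedback(score: int) -> dict:
    """Map a 0-10 score to an expertise level and a set of recommendations."""
    if score >= 8:
        level = "expert"
    elif score >= 5:
        level = "intermediate"
    else:
        level = "beginner"
    # Assumed rule: beginners get the full checklist, others a short reminder.
    tips = RECOMMENDATIONS if level == "beginner" else RECOMMENDATIONS[:3]
    return {"level": level, "recommendations": tips}

print(feedback(4)["level"])   # -> beginner
```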




                             Fig. 3. First feedback of the prototype




         Fig. 4. Infographics generated as a result of the interaction with the prototype


5 User Testing and Evaluation


For this phase, two evaluation studies were performed. First, a set of heuristic
evaluations was carried out to verify the viability of the prototype. This




study was conducted with 30 users between 16 and 20 years old, and the items
verified were: simplicity, consistency, feedback, affordance, flexibility, perceptibility
and ease of use.
    After that, real user testing was performed. This study was conducted with 5 real
users, and the sessions were recorded in order to analyze them and improve the
prototype (Fig. 5).
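   A simple way to aggregate such heuristic ratings, assuming each participant scored
every item on a 1-5 scale (the paper does not state the scale used), is sketched below.

```python
# Sketch of aggregating heuristic-evaluation ratings per item, assuming each
# participant rated every item on a 1-5 scale (the scale is an assumption).
ITEMS = ["simplicity", "consistency", "feedback", "affordance",
         "flexibility", "perceptibility", "ease of use"]

def average_ratings(responses):
    """responses: one dict per participant, mapping item name -> rating."""
    return {item: sum(r[item] for r in responses) / len(responses)
            for item in ITEMS}

# Example with two hypothetical participants.
sample = [{item: 4 for item in ITEMS}, {item: 5 for item in ITEMS}]
print(average_ratings(sample))   # -> 4.5 for every item
```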




                                    Fig. 5. User testing


6 Conclusions


This paper proposes a prototype that helps users spot fake news. A complete
User-Centered Design methodology was followed to propose a solution to a real
problem. The project starts with the identification of the users and their problem;
then a prototype is implemented; finally, a complete evaluation is conducted to verify
the viability of the proposal.
   Young people, with their intense interaction with technology and in particular with
social networks, must have tools that help them make better decisions. Fake news
circulates freely and daily on social networks, and it is important to make young
people aware that not all the news we receive is real and that we must learn to
identify fake news. The proposed prototype tries to raise awareness of this problem
so that young users can navigate social networks safely and reliably.




References

 1.   Oxford Dictionaries, website,
      https://www.oxforddictionaries.com/press/news/2016/12/11/WOTY-16 (2017).
 2.   Collins Dictionary, website, https://www.collinsdictionary.com/woty (2017).
 3.   Conroy, N. J., Rubin, V. L., Chen, Y.: Automatic deception detection: Methods for
      finding fake news. Proceedings of the Association for Information Science and
      Technology, 52(1): 1-4 (2015). DOI:10.1002/pra2.2015.145052010082
 4.   Allcott, H., Gentzkow, M.: Social media and fake news in the 2016 election. Journal of
      Economic Perspectives, 31(2): 211-236 (2017). DOI:10.1257/jep.31.2.211
 5.   Rubin, V., Chen, Y., Conroy, N.: Deception detection for news: three types of fakes.
      Proceedings of the Association for Information Science and Technology, 52(1): 1-4
      (2015).
 6.   Ruchansky, N., Seo, S., Liu, Y.: CSI: A hybrid deep model for fake news detection.
      arXiv preprint arXiv:1703.06959 (2017).
 7.   Larcker, D., Zakolyukina, A.: Detecting deceptive discussions in conference calls.
      Journal of Accounting Research, 50(2): 495-540 (2012).
 8.   Long, Y., Xiao, Y., Li, M., Huang, H.: Domain-specific user preference prediction based
      on multiple user activities. In: Proceedings of the IEEE International Conference on Big
      Data, pp. 3913-3921. IEEE (2016).
 9.   Wang, W.: "Liar, liar pants on fire": A new benchmark dataset for fake news detection.
      arXiv preprint arXiv:1705.00648 (2017).
10.   Tang, D., Qin, B., Liu, T.: Document modeling with gated recurrent neural network for
      sentiment classification. In: Proceedings of the 2015 Conference on Empirical Methods
      in Natural Language Processing, pp. 1422-1432 (2015).
11.   Yang, Z., Dyer, C., He, X., Smola, A., Hovy, E.: Hierarchical attention networks for
      document classification. In: Proceedings of the 2016 Conference of the North American
      Chapter of the Association for Computational Linguistics: Human Language
      Technologies (2016).
12.   Chen, H., Tu, C., Lin, Y., Liu, Z.: Neural sentiment classification with user and product
      attention. In: Proceedings of the 2016 Conference on Empirical Methods in Natural
      Language Processing (2016).
13.   Nielsen, L.: Engaging Personas and Narrative Scenarios. Samfundslitteratur, PhD Series
      (2004).
14.   Gibson, C., Jacobson, T.: Informing and extending the draft ACRL Information Literacy
      Framework for Higher Education: An overview and avenues for research. College and
      Research Libraries, 75(3): 250-254 (2014).



