<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title/>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>On the Readability of Misinformation in Comparison to the Truth</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mohammadali Tavakoli</string-name>
          <email>ali.tavakoli@open.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Harith Alani</string-name>
          <email>harith.alani@open.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Grégoire Burel</string-name>
          <email>gregoire.burel@open.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Knowledge Media Institute, The Open University</institution>
          ,
          <addr-line>Walton Hall, Milton Keynes, MK7 6AA</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0003</lpage>
      <abstract>
        <p>Psychological studies have demonstrated that much of the misinformation circulating on the Web tends to be more believable and memorable due to its ease of processing. The readability of a passage is a crucial factor in its ease of processing, as it indicates how easy or difficult the passage is to read and understand. According to some qualitative research, if online misinformation is easier to read, it becomes stickier and more memorable. In contrast, other studies have shown that people are more likely to trust and believe misinformation when it appears more complex. As a result of such conflicting findings, it remains unclear how readability is associated with true or false content on the Web in general. This paper aims to gain a deeper understanding of readability through quantitative analysis by applying six readability formulas to four datasets containing both true and false content. Our research shows that false claims are generally harder to read than true claims.</p>
      </abstract>
      <kwd-group>
        <kwd>Ease of processing</kwd>
        <kwd>Readability</kwd>
        <kwd>Misinformation</kwd>
        <kwd>False claims</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Psychological research has demonstrated through a range of qualitative studies that
misinformation tends to be easier to process in general, and thus easier to believe and remember
[
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Ease of processing, also called processing fluency, refers to the ease with which a piece of
information can be processed by its readers. Understanding what makes misinformation easier
to process is key to producing more effective methods to curb its spread.
      </p>
      <p>
        In textual content, one of the features that influence its ease of processing is readability
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Currently, research findings conflict with respect to how readability is associated with online
misinformation. On the one hand, easy-to-read misinformation is found to stick more in
readers’ minds [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and on the other hand, people are found to be more likely to trust and believe
more complex information [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This raises the need to analyse information that is known to
be false and compare its readability with that of true information, to help
better determine how high or low readability is associated with true or false information online.
      </p>
      <p>To understand how readability relates to these categories, we analysed the readability of true
and false information collected from the Web. To this end, the research question addressed in
this paper is: How does the readability of misinformation compare to that of true information?</p>
      <p>To address this question, we collect datasets of true and false content
items (i.e., claims and news articles) and analyse them in terms of readability. The main contributions
of this paper are: (1) analysing four datasets of true and false information from the Web; (2)
measuring and comparing the readability of the datasets using six different readability measures;
and (3) demonstrating that misinformation appears to be harder to read than true information.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        The human mechanism for assessing truth often consists of two phases: intuitive and
analytic assessment. In the intuitive phase, we decide whether to accept
the received information or to begin the analytic assessment process [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. The simpler and
more intuitive the information is to us, the less likely we are to kick-start the analytical process
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The ease of processing of (mis)information is, therefore, an influential factor in how quickly
and intuitively we are prone to accepting such information without proper scrutiny [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Various parameters have been found to be associated with increasing ease of processing, such
as familiarity [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ], compatibility with prior beliefs [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ], perceived credibility of source
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], and social consensus [
        <xref ref-type="bibr" rid="ref14 ref15 ref16">14, 15, 16</xref>
        ]. Readability is another key feature for assessing the ease
of processing of textual content; it reflects the level of difficulty with which a text can be
read and understood [17]. Some readability studies focused on cosmetic features such as colour
contrast [18] and font type and size [19]. In [19], the authors found that 35% more participants were
misled by information presented in easier-to-read fonts. In a study of over 92K false and true
news articles, misinformation was found to be 3% easier to read than true information
[20], where readability was measured using the Flesch-Kincaid (FK) method [21], which takes into
account the number of words, sentences, and syllables to calculate the readability level of a
given text.
      </p>
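      <p>For illustration, the FK grade-level formula from [21] combines exactly those three counts. The sketch below is our own, with a naive vowel-run syllable counter standing in for a proper syllable dictionary:</p>

```python
import re

def count_syllables(word):
    # Naive heuristic: each run of vowels counts as one syllable (assumption;
    # FK implementations normally use a syllable dictionary or better rules).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Sentences are delimited by terminal punctuation; words by whitespace.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    syllables = sum(count_syllables(w.strip(".,;:!?")) for w in words)
    asl = len(words) / len(sentences)        # average sentence length
    asw = syllables / len(words)             # average syllables per word
    return 0.39 * asl + 11.8 * asw - 15.59   # Kincaid et al. grade formula

print(round(flesch_kincaid_grade("The cat sat on the mat. It was happy."), 2))
```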
      <p>
        In some scenarios, readability was found to play a rather surprising role. For example, in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
the authors found that when participants were provided with text containing either false or true information, they
trusted the harder-to-read text regardless of its veracity. The authors concluded that reading
difficulty gave a stronger perception of truthfulness [22]. Other researchers found that readers
tend to invest less cognitive effort in judging the truthfulness of news when it has a higher
level of reading difficulty, i.e., they believe the information at face value [23].
      </p>
      <p>Some readability measures have been used as classification attributes to distinguish
between true and false information. FK and GFI (see Section 3.3), for example, were used
along with several other lexical, stylistic, and grammatical features by Horne and Adali [24]
in an SVM-based model to classify news articles into true, false, and satire. The authors concluded
that the style and complexity of fake content are significantly different from real news, and yet
more closely related to satire than to real news. They found that readability-related features
improved the classification of news articles into the target classes. A similar model was built
in [25] to classify Portuguese news articles into true and false. The authors used 165 textual
features, including some readability measures adapted to the language. Although it is
not yet known whether their findings from the Portuguese data generalise
to English and other languages, they show that classifiers with readability-related features,
such as DCI and GFI (see Section 3.3), achieve higher accuracy. These studies, however,
lack a proper analysis of how each of these features is associated with true and false
information and of the extent to which these associations differ from each other.</p>
      <p>From the above, it is clear that readability can be measured in different ways and can have
different impacts on misinformation. Our work differs from the state of the art
in that we apply multiple computational methods for calculating readability, and we perform
this analysis on several datasets of true and false information. Expanding the analysis to more
readability methods and datasets increases the chances of establishing more concrete and
representative evidence of how readability differs between true and false information.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Readability of True and False Information</title>
      <p>The aim of this paper is to measure and compare the readability of online misinformation and
true information to gain a better understanding of how readability differs between the two
categories of content. To achieve this in a systematic manner, the readability score of each content
item is calculated using six different readability measures (Section 3.3). In addition to three
datasets of short claims, a dataset of full news articles is also processed in our experiment. The
workflow of our experiments is as follows: (1) collect datasets consisting of true and false
claims found on the Web, written in varying lengths (full news articles, short messages); (2)
pre-process the datasets; (3) calculate the readability of each content item and aggregate the
values in our four datasets using six readability measures; (4) evaluate the readability difference
for each of the datasets depending on their true/false labels.</p>
      <sec id="sec-3-1">
        <title>3.1. Datasets</title>
        <p>In our experiments, two different types of data are used for readability measurement and
comparison: a dataset of full news articles and three datasets of short texts. Each dataset
consists of true and false claims. The first dataset used in this study is a collection of 5K full
news articles named the Fake News Detection Challenge Dataset1 (KDD2020), gathered from a
variety of news websites in 2020. The veracity of each article is manually labelled with 0 or 1,
indicating true and false respectively. The average length of the articles is 27.84 sentences.</p>
        <p>The second dataset is a manufactured collection of 67,366 claims named FEVEROUS2 (Fact
Extraction and VERification Over Unstructured and Structured information) [26]. This dataset
was manually generated in 2021. Each claim was verified against relevant Wikipedia pages by
trained annotators and labelled SUPPORTED, REFUTED, or NOT ENOUGH EVIDENCE.
For our experiments, we only consider the claims that were either SUPPORTED or REFUTED.</p>
        <p>PubHealth3 [27] is another dataset of claims. The dataset was constructed in 2020 and
consists of 11K claims collected from fact-checking websites (i.e., Politifact, FactCheck, Snopes,
TruthorFiction, and FullFact) and online news sources (i.e., Associated Press, Reuters News,
and Health News Review). In this experiment, an equal number of claims from each source is
selected to avoid bias. The veracity labels provided with the dataset are true, false, mixture,
and unproven. To meet the needs of our experiments, only true and false labels are used.
1Fake News Detection Challenge, https://www.kaggle.com/c/fakenewskdd2020/data.
2FEVEROUS, https://fever.ai/dataset/feverous.html.
3PubHealth, https://github.com/neemakot/Health-Fact-Checking.</p>
        <p>The last dataset of claims is LIAR4 [28], with 12.8K claims collected from
Politifact.com. The labels used in coding the data are pants-fire, false, barely-true, half-true, mostly-true,
and true. Our focus is on claims that are untrue (pants-fire and false labels) and true.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Pre-processing</title>
        <p>The pre-processing tasks aim to clean and prepare the data for our experiment. The
pre-processing phase consists of the following tasks: discarding duplicates; discarding non-English content
items and short items of fewer than 3 words; removing punctuation apart from full stops, which
indicate sentence boundaries; and discarding irrelevant or excessively repeated symbols and
characters such as emoji, asterisks, hashes, etc.</p>
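        <p>A minimal sketch of these cleaning steps (our own illustration; the actual pipeline, including its language filter, is not published with the paper):</p>

```python
import re

def clean_items(texts):
    # Strip symbols and punctuation except full stops, collapse whitespace,
    # then drop exact duplicates and items with fewer than three words.
    seen, cleaned = set(), []
    for text in texts:
        text = re.sub(r"[^\w\s.]", " ", text)   # keep full stops only
        text = re.sub(r"\s+", " ", text).strip()
        if text.lower() in seen:
            continue
        if len(text.split()) not in (0, 1, 2):  # discard items under 3 words
            seen.add(text.lower())
            cleaned.append(text)
    return cleaned

print(clean_items(["A false claim spreads fast.", "A false claim spreads fast.", "Too short"]))
```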
        <p>The number of articles in each dataset is not balanced. Therefore, to avoid bias, we selected
the same number of items from each set (false, true) after cleaning the data and removing noise. Apart
from the full-article dataset, for which no source information is available, we also balance the number of
claims with regard to the source (e.g., BBC, CBS) for all other datasets, to minimise bias that could emerge from a
particular source (e.g., a specific writing style or more complex text). The
final size of the datasets used in our study, along with some statistics about the pre-processing
steps, is shown in Table 1.</p>
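        <p>The per-source, per-label balancing can be sketched as follows (our own illustration; key_label and key_source are hypothetical accessor functions, and the grouping granularity is an assumption):</p>

```python
import random

def balance(items, key_label, key_source):
    # Group items by (source, label) and downsample every group to the size
    # of the smallest one, so no source or veracity class dominates.
    groups = {}
    for item in items:
        groups.setdefault((key_source(item), key_label(item)), []).append(item)
    n = min(len(g) for g in groups.values())
    rng = random.Random(0)  # fixed seed for reproducibility
    return [item for g in groups.values() for item in rng.sample(g, n)]
```

<p>Applied to each claims dataset, this leaves an equal number of true and false items per source.</p>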
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Readability Measures</title>
        <p>The readability tests used in this work for measuring the readability of false and
true content items are listed in Table 2. For each readability metric, we apply min-max
normalisation; the scores from each readability measure are therefore normalised
between 0 (very easy to read) and 100 (very hard to read) for comparative purposes.</p>
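        <p>A minimal sketch of the rescaling, assuming higher raw scores already mean harder text (measures where higher raw scores mean easier text, such as Flesch Reading Ease, would first be inverted):</p>

```python
def min_max_normalise(scores):
    # Rescale raw readability scores onto the common 0 (very easy) to
    # 100 (very hard) range used for cross-measure comparison.
    lo, hi = min(scores), max(scores)
    return [100.0 * (s - lo) / (hi - lo) for s in scores]

print(min_max_normalise([2.0, 5.0, 8.0]))
```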
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Readability Comparison Results</title>
      <p>In this section, we describe various comparisons of readability between the true and false sets
in our four datasets, to reach a better understanding of the similarities and differences in the
overall results as well as the results between the different datasets.
4LIAR, https://www.kaggle.com/code/hendrixwilsonj/liar-data-analysis.</p>
      <sec id="sec-4-1">
        <title>4.1. Statistical Comparison of Readability Scores</title>
        <p>To investigate whether false and true content items differ in terms of readability scores, we first
compare the means of these scores in all four datasets. Figure 1 shows the distribution of
these readability means across the datasets for both true and false sets. These results suggest
that although readability is relatively different across the datasets, it is more comparable
between the true and false sets within each individual dataset. Overall, we observe that the KDD2020
dataset has a lower readability score compared to the other datasets. This may be due to the
item length difference between this dataset and the other analysed datasets.</p>
        <p>To understand these readability values and the significance of the similarities
or differences between false and true content items, we obtain the scores from the readability
measures and apply the Mann-Whitney U (MWU) test. For this experiment, the significance
level (α) is set to 0.05, indicating that any calculated p-value ≤ α shows that a significant
difference exists between readability scores.</p>
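        <p>As an illustrative sketch (not the analysis code itself; in practice the p-value would come from a statistics library such as scipy.stats.mannwhitneyu), the U statistic underlying the test can be computed by pairwise comparison:</p>

```python
def mann_whitney_u(xs, ys):
    # U counts, over all (x, y) pairs, how often a score from xs ranks above
    # a score from ys; ties contribute 0.5. A U far from len(xs) * len(ys) / 2
    # suggests the two readability distributions differ.
    u = 0.0
    for x in xs:
        for y in ys:
            if x == y:
                u += 0.5
            elif max(x, y) == x:  # x ranks above y
                u += 1.0
    return u

# Toy normalised readability scores for a false set and a true set.
print(mann_whitney_u([62.1, 58.4, 71.0], [50.2, 58.4, 49.9]))
```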
        <p>Table 3 presents the results of the MWU test, showing that the content items in the false set
are generally harder to read than those in the true set, and that these distributional differences
are statistically significant. The only exception is the FEVEROUS dataset, which shows a different
pattern. However, as mentioned earlier, this dataset is lab-manufactured and hence more
likely to differ from the other three, more naturally generated, datasets.</p>
        <p>What we can conclude from the statistical analysis above is that false
content is generally harder to read than true content in all our datasets except the manufactured one.
This provides computational evidence in support of the common view and most qualitative
studies from psychologists, which argue that falsified information tends to be written in a more
complex fashion to give the perception of depth and truthfulness (see Section 2).</p>
        <p>What remains unknown is how the individual readability parameters differ from one set to
another, which is the focus of the next part of the experiment.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Comparison of Readability Parameters</title>
        <p>As discussed in Section 3.3, each readability formula has several influencing parameters for
calculating readability. To compare the influence of the different readability parameters between
the datasets, we use the Pearson Correlation Coefficient (PCC). Correlations between each
parameter and the readability of true and false content items across the datasets are presented
in Figure 2. It can be seen that the correlation between the parameters and the readability scores
of the formulas is positive in almost all cases. In general, there is a strong correlation between
ASL and the mean value of the readability scores. The figures also show that Char_Wrds
has a correlation slightly stronger than moderate with the mean value. Such findings enhance
our understanding of why readability proves to be different between true and false content
in our datasets (more on this in Section 5).</p>
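        <p>The PCC itself is straightforward to compute. A minimal sketch (our own illustration) correlating one parameter, such as ASL, with the readability scores:</p>

```python
def pearson_r(xs, ys):
    # Pearson correlation coefficient between a readability parameter
    # (e.g. ASL per item) and the corresponding readability scores.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# A perfectly linear relation gives r = 1.0.
print(pearson_r([10.0, 20.0, 30.0], [5.0, 10.0, 15.0]))
```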
        <p>[Figure 2 panels: (a) KDD2020, (b) FEVEROUS, (c) PubHealth, (d) Liar; each showing True (top) and False (bottom).]</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>The results of the analysis, illustrated in Figure 1, reveal that false content items are in
general slightly more difficult to read than true ones. This finding contradicts [20] (see Section
2). However, only one dataset was used in [20]. This indicates the need for further quantitative
research to better understand the reasons behind such variation in results.</p>
      <p>The analysis of the datasets showed an inconsistency between the FEVEROUS dataset and the
other datasets in the difference between the readability of false and true content items. Analysing
the FEVEROUS content shows that true claims are more difficult to read than false ones, which
contradicts our results from the other datasets (Figure 1). Looking into the collection/creation
process of these datasets, we can infer that the synthetic FEVEROUS dataset is not representative
of the real-world true/false content distributions observed in the other datasets, since
the FEVEROUS claims were written artificially by a limited number of experts from
the misinformation domain rather than naturally authored and published on the Web.</p>
      <p>Regarding the parameters used in the readability formulas, Figure 2 shows that, excluding
FEVEROUS for the deviation discussed above, in the remaining claims datasets (i.e., PubHealth
and Liar) Char-Wrds and ASW have a slightly stronger than moderate correlation with the
mean value of the readability scores. However, this is not the case in the dataset of full articles (i.e.,
KDD2020), which suggests that these parameters could have more impact when experimenting
with short texts. Their impact, however, is minor when using the GFI measure,
which might be due to the measure's reliance on complex words (words with more than 3 syllables),
diminishing the correlation of these parameters with the measure. On the
other hand, ASL shows the opposite pattern, appearing to be most influential for long documents.
It has a strong relationship with the mean value. Lengthier sentences are used in false news
articles, with an average of 29 words per sentence, whereas the average sentence length in
true content items is 25 words. This indicates that these parameters should be considered
when building models for identifying misinformation on the Web. The disparity in sentence
length between true and false content suggests that brevity and conciseness may be a key
differentiating factor between misinformation and true information, with misinforming content
being more convoluted than true content. Such variation in the correlation between parameters and
measures across different types of content items (i.e., claims and full news articles) enables
future research to be more judicious when selecting features for classifying content items of
different types.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Limitations and Future Work</title>
      <p>In this experiment, we looked into readability and its association with misinformation. Apart
from readability, the concept of ease of processing has other aspects, such as social consensus
and source credibility (see Section 2). Analytically investigating their association with
misinformation and discovering relevant features correlated with them would be an interesting angle to
investigate in future work.</p>
      <p>In this experiment, the only language considered was English. Although the readability
measures might need modifications to work properly with different languages, experimenting
with other languages might yield different findings that highlight the cultural and
structural differences between languages when dealing with true and false information.</p>
      <p>As discussed in Section 3.1, our focus was only on the content items with true and false labels,
while some datasets have additional fine-grained annotations, such as Not enough evidence,
unproven, mixture, etc. Although including such fine-grained labels in the analysis would make
the experiment more comprehensive, matching labels across various datasets annotated with
different guidelines is not straightforward and may yield inconsistent results.</p>
      <p>It is also of great importance to investigate how the association of readability with
misinformation differs across topics. Discovering topic-specific readability patterns and considering
them when building models for detecting misinformation is another research direction.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>Our analysis of four distinct datasets showed that readability scores are, in general, higher (i.e. more
difficult) for false information compared to true information. We found a strong difference
in the average sentence length and the number of characters per word between false and
true content, which could be used in misinformation detection models. We also found that
when measuring the readability of long documents, the average length of sentences is the most
indicative parameter, while the average number of syllables per word and the average number
of characters per word work best with short documents. Our analysis also showed that the
lab-manufactured FEVEROUS dataset produced readability patterns that were inconsistent with
the real-world Web data present in the other datasets. This shows the importance of using
real-world datasets when studying misinformation.</p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work has been partially supported by the European CHIST-ERA program via the UK
Engineering and Physical Sciences Research Council (UKRI - EP/V062662/1) within the CIMPLE
project (grant agreement CHIST-ERA-19-XAI-003).</p>
      <p>[16] P. S. Visser, R. R. Mirabile, Attitudes in the social context: The impact of social network composition on individual-level attitude strength, Journal of Personality and Social Psychology 87 (2004) 779.
[17] C. Tekfi, Readability formulas: An overview, Journal of Documentation (1987).
[18] H. Geoffrey, R. Rolf, Forming judgments of attitude certainty, importance, and intensity: The role of subjective experiences, Personality and Social Psychology Bulletin (1999) 771-782.
[19] H. Song, N. Schwarz, Fluency and the detection of misleading questions: Low processing fluency attenuates the Moses illusion, Social Cognition 26 (2008) 791.
[20] C. Carrasco-Farré, The fingerprints of misinformation: How deceptive content differs from reliable sources in terms of cognitive effort and appeal to emotions, Humanities and Social Sciences Communications 9 (2022).
[21] J. P. Kincaid, R. P. Fishburne Jr, R. L. Rogers, B. S. Chissom, Derivation of new readability formulas (Automated Readability Index, Fog Count and Flesch Reading Ease formula) for Navy enlisted personnel, Technical Report, Naval Technical Training Command, Millington TN, Research Branch, 1975.
[22] B. Lutz, M. T. Adam, S. Feuerriegel, N. Pröllochs, D. Neumann, Identifying linguistic cues of fake news associated with cognitive and affective processing: Evidence from NeuroIS, in: NeuroIS Retreat, Springer, 2020, pp. 16-23.
[23] H. A. Simon, Motivational and emotional controls of cognition, Psychological Review 74 (1967) 29.
[24] B. Horne, S. Adali, This just in: Fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news, in: Proceedings of the International AAAI Conference on Web and Social Media, volume 11, 2017, pp. 759-766.
[25] R. Santos, G. Pedro, S. Leal, O. Vale, T. Pardo, K. Bontcheva, C. Scarton, Measuring the impact of readability features in fake news detection, in: Proceedings of the 12th Language Resources and Evaluation Conference, 2020.
[26] R. Aly, Z. Guo, M. Schlichtkrull, J. Thorne, A. Vlachos, C. Christodoulopoulos, O. Cocarascu, A. Mittal, FEVEROUS: Fact extraction and verification over unstructured and structured information, arXiv preprint arXiv:2106.05707 (2021).
[27] N. Kotonya, F. Toni, Explainable automated fact-checking for public health claims, arXiv preprint arXiv:2010.09926 (2020).
[28] W. Y. Wang, "Liar, liar pants on fire": A new benchmark dataset for fake news detection, arXiv preprint arXiv:1705.00648 (2017).
[29] R. F. Flesch, et al., Art of Readable Writing (1949).
[30] R. Gunning, The fog index after twenty years, Journal of Business Communication 6 (1969) 3-13.
[31] R. Senter, E. A. Smith, Automated Readability Index, Technical Report, Cincinnati Univ OH, 1967.
[32] J. S. Chall, E. Dale, Readability Revisited: The New Dale-Chall Readability Formula, Brookline Books, 1995.
[33] G. Spache, A new readability formula for primary-grade reading materials, The Elementary School Journal 53 (1953) 410-413.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Schwarz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jalbert</surname>
          </string-name>
          ,
          <article-title>When (fake) news feels true: Intuitions of truth and the acceptance and correction of misinformation</article-title>
          , in: The Psychology of Fake News, Routledge,
          <year>2020</year>
          , pp.
          <fpage>73</fpage>
          -
          <lpage>89</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Reber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Greifeneder</surname>
          </string-name>
          ,
          <article-title>Processing fluency in education: How metacognitive feelings shape learning, belief formation, and affect</article-title>
          ,
          <source>Educational psychologist 52</source>
          (
          <year>2017</year>
          )
          <fpage>84</fpage>
          -
          <lpage>103</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Rennekamp</surname>
          </string-name>
          ,
          <article-title>Processing fluency and investors' reactions to disclosure readability</article-title>
          ,
          <source>Journal of accounting research 50</source>
          (
          <year>2012</year>
          )
          <fpage>1319</fpage>
          -
          <lpage>1354</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Withall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Sagi</surname>
          </string-name>
          ,
          <article-title>The impact of readability on trust in information</article-title>
          ,
          <source>in: Proceedings of the Annual Meeting of the Cognitive Science Society</source>
          , volume
          <volume>43</volume>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Stanovich</surname>
          </string-name>
          ,
          <article-title>Who is rational? Studies of individual differences in reasoning</article-title>
          , Psychology Press,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Petty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Cacioppo</surname>
          </string-name>
          ,
          <article-title>The elaboration likelihood model of persuasion</article-title>
          , in: Communication and persuasion, Springer,
          <year>1986</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kahneman</surname>
          </string-name>
          , Thinking, fast and slow, Farrar, Straus and Giroux, New York,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L. E.</given-names>
            <surname>Boehm</surname>
          </string-name>
          ,
          <article-title>The validity efect: A search for mediating variables</article-title>
          ,
          <source>Personality and Social Psychology Bulletin</source>
          <volume>20</volume>
          (
          <year>1994</year>
          )
          <fpage>285</fpage>
          -
          <lpage>293</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gefen</surname>
          </string-name>
          ,
          <article-title>E-commerce: the role of familiarity and trust</article-title>
          ,
          <source>Omega</source>
          <volume>28</volume>
          (
          <year>2000</year>
          )
          <fpage>725</fpage>
          -
          <lpage>737</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Newman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sanson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. K.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Quigley-McBride</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Foster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Bernstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Garry</surname>
          </string-name>
          ,
          <article-title>People with easier to pronounce names promote truthiness of claims</article-title>
          ,
          <source>PloS one</source>
          <volume>9</volume>
          (
          <year>2014</year>
          )
          <fpage>e88671</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>W.</given-names>
            <surname>Kintsch</surname>
          </string-name>
          ,
          <article-title>Comprehension: A paradigm for cognition</article-title>
          , Cambridge university press,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>E.</given-names>
            <surname>Aronson</surname>
          </string-name>
          ,
          <article-title>The theory of cognitive dissonance: A current perspective</article-title>
          , in: Advances in experimental social psychology, volume
          <volume>4</volume>
          , Elsevier,
          <year>1969</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Eagly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chaiken</surname>
          </string-name>
          ,
          <article-title>The psychology of attitudes</article-title>
          , Harcourt Brace Jovanovich College Publishers,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Cialdini</surname>
          </string-name>
          , L. James,
          <source>Influence: Science and practice</source>
          , volume
          <volume>4</volume>
          , Pearson education Boston,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Festinger</surname>
          </string-name>
          ,
          <article-title>A theory of social comparison processes</article-title>
          ,
          <source>Human relations</source>
          <volume>7</volume>
          (
          <year>1954</year>
          )
          <fpage>117</fpage>
          -
          <lpage>140</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>