<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Understanding Characteristics of Biased Sentences in News Articles</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sora Lim</string-name>
          <email>lim.sora.88u@st.kyoto-u.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Adam Jatowt</string-name>
          <email>adam@dl.kuis.kyoto-u.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Masatoshi Yoshikawa</string-name>
          <email>yoshikawa@i.kyoto-u.ac.jp</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kyoto University</institution>
          ,
          <addr-line>Kyoto</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Providing balanced and good-quality news articles to readers is an important challenge in news recommendation. Often, readers tend to select and read articles which confirm their social environment and their political beliefs. This issue is also known as the filter bubble. As a remedy, initial approaches towards automatically detecting bias in news articles have been developed. Obtaining a suitable ground truth for such a task is, however, difficult. In this paper, we describe a ground truth dataset created with the help of crowd-sourcing for fostering research on bias detection and removal from news content. We then analyze the characteristics of the user annotations, in particular concerning bias-inducing words. Our results indicate that determining bias-inducing words is subjective to a certain degree and that high agreement of all readers on all bias-inducing words is hard to obtain. We also study the discriminative characteristics of biased content and find that linguistic features, such as negative words, tend to be indicative of bias.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>In news reporting it is important for both authors
and readers to maintain high fairness and accuracy, and
to keep a balance between different viewpoints.
However, bias in news articles has become a major issue
[GM05, Ben16], even though many news outlets claim
to have a dedicated policy to ensure the objectivity of
their articles. Different news sources may have their
own views on society, politics, and other
topics. Furthermore, they need to attract readers to keep
their businesses profitable. This frequently leads to a
potentially harmful reporting style, resulting in biased
news.</p>
<p>To overcome news bias, users often
try to choose news articles from news sources (outlets)
which are known to be relatively unbiased. Ideally, this
selection should be performed by corresponding recommender
systems. However, bias-free article recommendations
are still not feasible given the state of the art.
Furthermore, such recommendations might not be trusted
by users, as readers often need concrete evidence of
bias in the form of bias-inducing words and similar
aspects.</p>
<p>In this paper, we focus on understanding news bias
and on developing a high-quality gold standard for
fostering bias-detection studies on the sentence and
word levels. We assume here that word choices made
by articles' authors might reflect some bias in terms
of their viewpoint. For example, the phrases "illegal
immigrants" and "undocumented immigrants", chosen
by news reporters to refer to immigrants in relation
to Donald Trump's decision to rescind Deferred
Action for Childhood Arrivals, may be considered a case
where the choice of words can result in bias. Here,
the use of the word "illegal" degrades the immigrants
by inducing a more negative value than
the adjective "undocumented". By such nuanced
word choices, news authors may imply their stance on
the news event and deliver a biased view to the readers.</p>
<p>It is, however, challenging to identify words that
cause an article to have a biased point of view
[BEQ+15]. The bias inherent in news articles tends to
be subtle and intricate. In this research, we construct
a comparable news dataset which consists of news
articles reporting the same news event. The objective is
to help design methods to detect bias triggers
(the dataset is available at https://github.com/skymoonlight/newsdata-bias)
and to shed new light on the way in which users recognize
bias in news articles. To the best of our knowledge,
this is the first dataset with annotated bias words in
news articles. In the following, we describe the design
of the crowd-sourcing task used to obtain the bias labels
for the news words, and we subsequently analyze the
characteristics of the detected biased content in news.</p>
    </sec>
    <sec id="sec-2">
<title>Related Work</title>
<p>Several prior works have focused on media bias in
general and news bias in particular. According to
D'Alessio and Allen [DA00], media bias can
be divided into three different types: (1) gatekeeping,
(2) coverage, and (3) statement bias. Gatekeeping bias
is the selection of stories out of the set of potential stories;
coverage bias expresses how much space specific
positions receive in the media; statement bias, in contrast,
denotes how an author's own opinion is woven into a
text. Similarly, Alsem et al. [ABHK08] divide news
bias into ideology and spin. Ideology reflects a news
outlet's desire to affect readers' opinions in a particular
direction. Spin reflects the outlet's attempt to simply
create a memorable story. Given these distinctions, we
consider the bias type tackled in this paper as
statement bias w.r.t. [DA00] and as spin bias according to
[ABHK08].</p>
<p>Several studies have made efforts to provide effective
means for solving the news bias problem. However,
most of them have focused on news diversification
according to the content similarity and the political
stance of news outlets. Park et al. [PKCS09], for
instance, developed a news diversification system,
named NewsCube, to mitigate the bias problem by
providing diverse information to users. Hamborg
et al. [HMG17] presented a matrix-based news
analysis to display various perspectives on the same news
topic in a two-dimensional matrix. An et al. [ACG+12]
revealed the skewness of news outlets by analyzing how
their news content spreads through tweets.</p>
<p>Alonso et al. [ADS17] focused on omissions between
news statements which are similar but not identical.
Omission constitutes its own category of news bias in
that it is a means of statement bias [GS06]. Ogawa et
al. [OMY11] attempted to describe the relationships
between the main participants in news articles in order to detect news
bias. To capture the way these relationships are described, they
expanded the sentiment words in SentiWordNet [BES10].</p>
<p>Other works focused on linguistic analysis for bias
detection in text data. Recasens et al. [RDJ13]
targeted detecting bias words from the sentence revision
history in Wikipedia. They utilized NPOV tags as
bias labels, and linguistically categorized resources for
the bias features. Baumer et al. [BEQ+15] used
Recasens et al.'s linguistic features, as well as features from
the theoretical literature on framing, to identify biased
language in political news.</p>
<p>[Table 1: Statistics of the dataset and the labeled results: total number of news articles; total number of sentences; average tagged sentences per news article; number of sentences including tagged words; number of tagged sentences at agreement levels 2, 3, 4, and 5.]</p>
    </sec>
    <sec id="sec-3">
      <title>Annotating Bias in News Articles</title>
<p>One way to detect the subtle differences which cause bias
is to compare words across the content of different
news articles reporting the same news event.
This should allow for pinpointing differences in the
subtle use of words by different authors from diverse
media outlets describing the same event. Although
many news datasets have been created for news analysis, to
the best of our knowledge, none focuses on a single
event while, at the same time, covering many news
articles from various news outlets within a short time
range.</p>
<p>We selected the news event titled "Black men
arrested in Starbucks", which caused controversial
discussions about racism. The event happened on April
12, 2018. We focused on news articles written on April
15, 2018, as the event was widely reported in different
news outlets on that day.</p>
<p>For collecting news articles from various news
outlets we used Google News (https://news.google.com/?hl=en-US&amp;gl=US&amp;ceid=US:en).
Google News is a convenient source for our case as it already clusters news
articles concerning the same event coming from
various sources. We first crawled all news articles available
online that described the aforementioned event. Based
on manual inspection, we then verified that all
articles are about the same news event. We next extracted
the titles and text content from the crawled pages,
ignoring pages which covered only pictures or contained
only a single sentence. In the end, our dataset
consists of 89 news articles with 1,235 sentences and 2,542
unique words from 83 news outlets. Articles contain
on average 14 paragraphs.</p>
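<p>As an illustration of this extraction step, the following minimal Python sketch fetches an article page and pulls out its title and paragraph text. The URL, the use of the requests and BeautifulSoup libraries, and the simple paragraph heuristic are our own assumptions for illustration, not the exact crawler used to build the dataset.</p>
<preformat>
# Minimal sketch of the article-extraction step (assumed tooling:
# requests + BeautifulSoup; the actual crawler may differ).
import requests
from bs4 import BeautifulSoup

def extract_article(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    # Keep only paragraph text; pages with a single sentence or
    # only pictures are discarded later, as described above.
    paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
    paragraphs = [p for p in paragraphs if p]
    return title, paragraphs

# Hypothetical usage:
title, paragraphs = extract_article("https://example.com/news-article")
print(title, len(paragraphs))
</preformat>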
      <sec id="sec-3-1">
        <title>Bias Labeling via Crowd-Sourcing</title>
<p>To overcome the scalability issue in annotation,
crowdsourcing has been widely used [FMK+10, ZLP+15].
We also use crowdsourcing to collect bias labels, and
we chose Figure Eight (https://www.figure-eight.com/) as our platform. Figure Eight
(called CrowdFlower until March 2018) has been used
in a variety of annotation tasks and is especially
suitable for our purposes due to its focus on producing
high-quality annotations. We note that it is difficult
to obtain bias-related label information, such as binary
judgements on each sentence of a news article, as the
bias may depend on the news event and its context.
To design the bias labeling task, we divided the news
dataset into one reference news article (https://reut.rs/2ve3rMz) and 88 target
news articles. Having a reference news article, users
could first get familiar with the overall event.
Furthermore, the motivation was to have some reference
text which, being relatively bias-free, allows for
detecting biased content in a target article. Our reference
article was selected after being manually judged as
relatively unbiased by several annotators.</p>
<p>We let the workers make judgements on each
target news article (using also the reference news article).
Each article was independently annotated by 5
workers. In order to ensure high-quality labeling,
we produced various test questions to filter out
low-quality answers. To create reliable answers to our test
questions, we conducted a preliminary labeling task on
a set of five randomly selected news articles from the
same news collection, plus the same reference news
article used for comparison. Nine graduate students
(male: 6, female: 3) labeled bias-inducing words in
these news articles. The words which were
labeled as "bias-inducing" by at least two people were
considered as "biased" in general and served as ground
truth for our test questions.</p>
<p>The instructions and main questions given to the
workers in the crowdsourcing tasks and to the annotators
in the preliminary task can be summarized as follows:</p>
<p>1. Read the target news article and the reference news article.</p>
<p>2. Rate the degree of bias of the target news article by
comparing it with the reference news article
(not at all biased, slightly biased, fairly biased, strongly biased).</p>
<p>3. Select and submit the words or phrases which cause the
bias, compared to the reference news article.
Submit words or phrases together with the line identifier.
Keep the submitted content as short as possible and
do not submit whole paragraphs.
If no bias-inducing words are found, submit "none".</p>
<p>4. Rate your level of understanding of the news story
on a four-point scale from "I didn't understand at
all." to "I understood well."</p>
<p>In total, 60 workers participated in the task. We
only used the answers from the 25 reliable workers who
passed at least 50% of the test questions. Overall, for
the 88 documents, we collected 2,982 bias words (1,647
unique words) covered by 1,546 non-overlapping
annotations.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Analysis of Perceived News Bias</title>
<p>We next analyze what kind of words are tagged as bias
triggers by the workers. First, we analyze the phrases
annotated as biased in terms of their word length. Each
annotation consists of four words on average (examples
being "did absolutely nothing wrong", "putting them
in handcuffs", "racism and racial profiling", "merely
for their race", and "Starbucks manager was white").
Most answers submitted by workers are, however,
single words, for example, "accuse", "absurd",
"boycott", "discrimination", and "outrage". These
examples also show a tendency towards negative sentiment and
indicate that rather extreme, emotion-related words are
annotated, which could be extracted almost without
considering the context. The second most frequent
pattern is annotations of three words in a sentence,
such as "absolutely nothing wrong", "accusations of
racism", "black men arrested", "who is black", and
"other white ppl". These are typical combinations of
sentiment words and modifiers or intensifiers. These
sentiment words (with positive or negative polarity)
are typically associated with the overall topic or event
and can also be considered as outstanding or salient
to some degree.</p>
<p>We aggregated the answers of the crowd-workers on
the sentence level, assuming that if a sentence includes
any word annotated as biased, the sentence itself is
biased. Note that information on sentence-level
bias might be enough for the purpose of automatic
bias detection. However, we let users annotate the
specific bias-inducing phrases, since this lets us gain a
fine-grained insight into the actual thoughts of users,
allows us to choose appropriate machine learning features
for bias-detection algorithms, and makes it possible to show
concrete evidence of bias-inducing aspects in the texts to
users. Table 1 shows the statistics of the dataset and
the labeled results. Agreement level n denotes that only
annotations tagged by at least n people are
considered. When we only consider the unique, i.e., fused
answers from the workers, among the 1,235 sentences in
the whole dataset, 826 sentences (66.88%) included
bias-annotated words. On average, 73.48% of the
sentences in an article would then be considered potentially biased.
Yet, assuming an agreement of 2 workers,
the average share of biased sentences is 34.9%, while
for n = 3 the corresponding number is 14.01%. These
statistics reveal that different people consider different
words as representing biased content.</p>
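<p>A minimal sketch of this sentence-level aggregation is shown below; the data layout (a mapping from sentence ids to the set of workers who tagged at least one word in that sentence) is our own assumption for illustration.</p>
<preformat>
# Aggregate word-level bias annotations to sentence level (sketch).
# Assumed input: annotations[s] = set of worker ids who tagged at
# least one word of sentence s as bias-inducing.
def biased_sentences(annotations, min_agreement=1):
    """Return ids of sentences tagged by at least min_agreement workers."""
    return {s for s, workers in annotations.items()
            if len(workers) >= min_agreement}

# Hypothetical toy example with 3 sentences and workers w1..w4.
annotations = {0: {"w1", "w2", "w3"}, 1: {"w4"}, 2: set()}
for n in (1, 2, 3):
    share = len(biased_sentences(annotations, n)) / len(annotations)
    print(n, share)  # share of sentences biased at agreement level n
</preformat>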
<p>Inter-rater agreement. We next investigated the
inter-rater agreement among the five workers' answers
for each target news article. We calculated
Krippendorff's alpha and pairwise Jaccard similarity
coefficients. Krippendorff's alpha is used for quantifying
the extent of agreement among multiple raters, while
Jaccard similarity is mainly used for comparing the
similarity between two sets. Here, we regard each
sentence in a target news article as an item to be measured.
The mean scores calculated over all the target articles
are 0.513 for Krippendorff's alpha and 0.222 for Jaccard, as
shown also in Figure 1. The agreement scores are
relatively low, which means that the answers from
the five workers are diverse, with only slight agreement.
In practice, it is hard to get substantial agreement on
news articles in general [NR10]. This may have several
reasons in our case: Firstly, the degree of perception
concerning bias differs from person to person.
Secondly, the answer coverage differs between people and is
imperfect. For example, some people might feel it is
enough to submit around five different answers on a
target news article, while others might try to find as
many pieces of evidence of biased content as possible. It is then
hard to decide whether the differences stem from the
insincerity of individuals or are a matter of their perception.</p>
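<p>Both agreement measures can be computed, for instance, as in the following sketch; the krippendorff package and the per-article data layout (one binary sentence-label vector per worker) are our own assumptions for illustration.</p>
<preformat>
# Inter-rater agreement per article (sketch): Krippendorff's alpha
# over binary sentence labels, plus mean pairwise Jaccard similarity.
from itertools import combinations
import numpy as np
import krippendorff  # pip install krippendorff (assumed dependency)

def agreement(label_matrix):
    """label_matrix: workers x sentences array of 0/1 bias labels."""
    alpha = krippendorff.alpha(reliability_data=label_matrix,
                               level_of_measurement="nominal")
    # Jaccard over the sets of sentence ids each worker tagged.
    sets = [set(np.flatnonzero(row)) for row in label_matrix]
    jacc = [len(a.intersection(b)) / len(a.union(b)) if a.union(b) else 1.0
            for a, b in combinations(sets, 2)]
    return alpha, float(np.mean(jacc))

# Hypothetical toy example: 5 workers, 8 sentences.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=(5, 8))
print(agreement(labels))
</preformat>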
<p>Analysis of POS tags. We investigated the
part-of-speech tags included in the sentences. The Stanford
POS Tagger [TKMS03] was employed in this process.
We considered different agreement levels,
i.e., the minimum number of users who tagged words as
biased in the same sentence. We conducted a
t-test comparing the bias-tagged sentences and the non-tagged
sentences. Table 2 shows the statistically significant POS
tags at p &lt; 0.001.</p>
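<p>A sketch of this comparison is given below; NLTK's built-in tagger is used here as a stand-in for the Stanford POS Tagger, and the input format (lists of bias-tagged and non-tagged sentences) is our own assumption for illustration.</p>
<preformat>
# Per-sentence POS-tag counts compared via t-test (sketch).
# NLTK's averaged-perceptron tagger stands in for the Stanford tagger.
# Requires: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
import nltk
from scipy.stats import ttest_ind

def pos_count(sentence, tag_prefix):
    """Count tokens in a sentence whose POS tag starts with tag_prefix."""
    tags = nltk.pos_tag(nltk.word_tokenize(sentence))
    return sum(1 for _, t in tags if t.startswith(tag_prefix))

def compare(tagged_sents, untagged_sents, tag_prefix="JJ"):
    a = [pos_count(s, tag_prefix) for s in tagged_sents]
    b = [pos_count(s, tag_prefix) for s in untagged_sents]
    return ttest_ind(a, b, equal_var=False)  # Welch's t-test

# Hypothetical toy example (adjective counts, JJ*):
biased = ["The absurd arrest caused national outrage.",
          "They did absolutely nothing wrong."]
neutral = ["Two men were arrested at a Starbucks store.",
           "The store manager called the police."]
print(compare(biased, neutral))
</preformat>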
<p>Analysis of further linguistic features. We
also investigated words using the linguistic
categories proposed by [RDJ13], including sentiment,
subjectivity, verb types, named entities, and so on. In
Table 3 (only significant results are shown; p &lt; 0.001),
we observe that the most significant word
category is negative subjective words at agreement level
1. Weak subjective words and negative words are also
shown to be significant. We believe this result arises
because our news event is controversial and related to
an arrest; therefore, many negative words affect
the bias perception of users. Interestingly, factive verbs
do not show any significant difference.</p>
<p>For preliminary experiments, we next use the
POS tags and the mentioned linguistic features for
approaching the task of automatically detecting bias.
We employ a standard SVM model and use a randomly
selected 80% of the sentences for training the model
and the remaining 20% of the sentences for testing. The
classification accuracy is 70%. As our dataset is
primarily designed for linguistic analysis, larger numbers
of training/test examples are needed for obtaining more
reliable evaluation results.</p>
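<p>A minimal version of this classification setup is sketched below using scikit-learn; the feature matrix (per-sentence POS and lexicon counts) and the SVM hyperparameters are our own assumptions for illustration.</p>
<preformat>
# Sentence-level bias classification with an SVM (sketch).
# X: per-sentence feature vectors (e.g., POS-tag and lexicon counts);
# y: 1 if the sentence contains bias-annotated words, else 0.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((1235, 20))          # placeholder features, one row per sentence
y = rng.integers(0, 2, size=1235)   # placeholder bias labels

# 80% of sentences for training, 20% for testing, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
</preformat>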
<p>Further extensions. We analyzed bias in news
sentences as perceived by people, using
crowdsourcing. In this research, we used a news event that
occurred within a short time period. Thus, users did not need
to spend much time to understand the context of the
news event. However, in the case of a long-lasting
news event, the news topic tends to be complicated or
to consist of many sub-events, and there might be many
aspects to be aware of. For example, politics-related
news events typically have a long time span: when
they cover elections, reports on the actions of
candidates appear in the weeks beforehand. For detecting
and/or minimizing news bias in such more complex
situations, an alternative strategy for obtaining a
reasonable ground truth concerning news bias might be
to focus on credibility aspects and to target the
recommendation of citations to clearly and formally stated
facts and/or events, such as ones in existing knowledge
bases.</p>
      </sec>
    </sec>
    <sec id="sec-4">
<title>Conclusions and Future Work</title>
<p>Detecting news bias is a challenging task for computer
science as well as for linguistics and media research,
due to the subtle nature and the heterogeneous, diverse
kinds of biases. In this paper, we set up a
crowdsourcing task to annotate news articles with respect to
bias-inducing words. We then analyzed features of
the annotated words based on different user agreement
levels. Based on the results, we draw the following
conclusions:</p>
<p>1. Generally, it is hard to reach an agreement among
users concerning biased words or sentences.</p>
<p>2. According to the results, it is reasonable to focus on
linguistic features, such as negative words,
negative subjective words, etc., for detecting bias on
the word level. This also means that capturing the context,
such as having semantically structured representations of
statements or sentences, might not be needed for
shallow bias detection.</p>
<p>3. Our experiments on the characteristics of
bias-inducing words indicate that presenting
readers with bias-inducing words (e.g., by highlighting
them in the text) is still worthwhile to pursue
in the future.</p>
<p>4. A deeper analysis of bias in the news is needed.</p>
<p>Current efforts, such as the SemEval 2019 Task 4
("Hyperpartisan News Detection", https://pan.webis.de/semeval19/semeval19-web/),
can be seen as first steps in this direction. More generally, we
argue that we need novel ways to measure the
actual bias of news (and other texts). This could be
achieved by measuring the effect of reading an article,
not only by asking readers about their opinion on the
topic/event before and after reading,
but also by correlating the read news with
actions, such as the votes of readers in upcoming
elections.</p>
      <p>Acknowledgments This research was supported
in part by MEXT grants (#17H01828; #18K19841;
#18H03243).
</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>[ABHK08] Karel Jan</surname>
            <given-names>Alsem</given-names>
          </string-name>
          , Steven Brakman, Lex Hoogduin, and
          <string-name>
            <given-names>Gerard</given-names>
            <surname>Kuper</surname>
          </string-name>
          .
          <article-title>The impact of newspapers on consumer con - dence: does spin bias exist?</article-title>
          <source>Applied Economics</source>
          ,
          <volume>40</volume>
          (
          <issue>5</issue>
          ):
          <volume>531</volume>
          {
          <fpage>539</fpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [ACG+12]
          <string-name>
            <surname>Jisun</surname>
            <given-names>An</given-names>
          </string-name>
          , Meeyoung Cha, Krishna P Gummadi, Jon Crowcroft, and
          <string-name>
            <given-names>Daniele</given-names>
            <surname>Quercia</surname>
          </string-name>
          .
          <article-title>Visualizing media bias through Twitter</article-title>
          .
          <source>In Proc. of ICWSM SocMedNews Workshop</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>Hector</given-names>
            <surname>Mart nez Alonso</surname>
          </string-name>
          , Amaury Delamaire, and Beno^t Sagot.
          <article-title>Annotating omission in statement pairs</article-title>
          .
          <source>In Proc. of LAW@EACL</source>
          <year>2017</year>
          , pages
          <fpage>41</fpage>
          {
          <fpage>45</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>W Lance</given-names>
            <surname>Bennett</surname>
          </string-name>
          .
          <article-title>News: The politics of illusion</article-title>
          . University of Chicago Press,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [BEQ+15]
          <string-name>
            <surname>Eric</surname>
            <given-names>Baumer</given-names>
          </string-name>
          , Elisha Elovic, Ying Qin, Francesca Polletta, and
          <string-name>
            <given-names>Geri</given-names>
            <surname>Gay</surname>
          </string-name>
          .
          <article-title>Testing and comparing computational approaches for identifying the language of framing in political news</article-title>
          .
          <source>In Proc. of NAACL HLT</source>
          <year>2015</year>
          , pages
          <fpage>1472</fpage>
          {
          <fpage>1482</fpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <given-names>Stefano</given-names>
            <surname>Baccianella</surname>
          </string-name>
          ,
          <source>Andrea Esuli, and Fabrizio Sebastiani. SentiWordNet 3</source>
          .
          <article-title>0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining</article-title>
          .
          <source>In Proc of LREC</source>
          <year>2010</year>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <source>Journal of communication</source>
          ,
          <volume>50</volume>
          (
          <issue>4</issue>
          ):
          <volume>133</volume>
          {
          <fpage>156</fpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [FMK+10]
          <string-name>
            <surname>Tim</surname>
            <given-names>Finin</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>William</given-names>
            <surname>Murnane</surname>
          </string-name>
          , Anand Karandikar, Nicholas Keller, Justin Martineau, and
          <string-name>
            <given-names>Mark</given-names>
            <surname>Dredze</surname>
          </string-name>
          .
          <article-title>Annotating Named Entities in Twitter Data with Crowdsourcing</article-title>
          .
          <source>In Proc. of CSLDAMT'10</source>
          , pages
          <fpage>80</fpage>
          {
          <fpage>88</fpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [GS06]
          <article-title>Tim Groseclose and Je rey Milyo. A measure of media bias</article-title>
          .
          <source>The Quarterly Journal of Economics</source>
          ,
          <volume>120</volume>
          (
          <issue>4</issue>
          ):
          <volume>1191</volume>
          {
          <fpage>1237</fpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <article-title>Media bias and reputation</article-title>
          .
          <source>Journal of political Economy</source>
          ,
          <volume>114</volume>
          (
          <issue>2</issue>
          ):
          <volume>280</volume>
          {
          <fpage>316</fpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [HMG17]
          <string-name>
            <given-names>Felix</given-names>
            <surname>Hamborg</surname>
          </string-name>
          , Norman Meuschke, and
          <string-name>
            <given-names>Bela</given-names>
            <surname>Gipp</surname>
          </string-name>
          .
          <article-title>Matrix-Based News Aggregation: Exploring Di erent News Perspectives</article-title>
          .
          <source>In Proc. of JCDL</source>
          <year>2017</year>
          , pages
          <fpage>69</fpage>
          {
          <fpage>78</fpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [NR10]
          <string-name>
            <given-names>Stefanie</given-names>
            <surname>Nowak and Stefan M. Ru</surname>
          </string-name>
          <article-title>ger. How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation</article-title>
          .
          <source>In Proc.</source>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <source>of MIR</source>
          <year>2010</year>
          , pages
          <fpage>557</fpage>
          {
          <fpage>566</fpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [OMY11]
          <string-name>
            <given-names>Tatsuya</given-names>
            <surname>Ogawa</surname>
          </string-name>
          , Qiang Ma, and
          <string-name>
            <given-names>Masatoshi</given-names>
            <surname>Yoshikawa</surname>
          </string-name>
          .
          <article-title>News bias analysis based on stakeholder mining</article-title>
          .
          <source>IEICE Transactions</source>
          , 94-D(3):
          <volume>578</volume>
          {
          <fpage>586</fpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [PKCS09]
          <string-name>
            <given-names>Souneil</given-names>
            <surname>Park</surname>
          </string-name>
          , Seungwoo Kang, Sangyoung Chung, and Junehwa Song.
          <article-title>NewsCube: delivering multiple aspects of news to mitigate media bias</article-title>
          .
          <source>In Proc. of SIGCHI on Human Factors in Computing Systems</source>
          , pages
          <fpage>443</fpage>
          {
          <fpage>452</fpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [RDJ13]
          <string-name>
            <given-names>Marta</given-names>
            <surname>Recasens</surname>
          </string-name>
          ,
          <string-name>
            <surname>Cristian DanescuNiculescu-Mizil</surname>
          </string-name>
          , and Dan Jurafsky.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <article-title>Linguistic models for analyzing and detecting biased language</article-title>
          .
          <source>In Proc. of ACL</source>
          <year>2013</year>
          , volume
          <volume>1</volume>
          , pages
          <fpage>1650</fpage>
          {
          <fpage>1659</fpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [TKMS03]
          <string-name>
            <given-names>Kristina</given-names>
            <surname>Toutanova</surname>
          </string-name>
          , Dan Klein,
          <string-name>
            <given-names>Christopher D.</given-names>
            <surname>Manning</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Yoram</given-names>
            <surname>Singer</surname>
          </string-name>
          .
          <article-title>Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network</article-title>
          .
          <source>In Proc. of HLT-NAACL</source>
          <year>2003</year>
          , pages
          <fpage>173</fpage>
          {
          <fpage>180</fpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [ZLP+15]
          <string-name>
            <surname>Arkaitz</surname>
            <given-names>Zubiaga</given-names>
          </string-name>
          , Maria Liakata, Rob Procter, Kalina Bontcheva, and
          <string-name>
            <given-names>Peter</given-names>
            <surname>Tolmie</surname>
          </string-name>
          .
          <article-title>Crowdsourcing the annotation of rumourous conversations in social media</article-title>
          .
          <source>In Proc. of WWW</source>
          <year>2015</year>
          , pages
          <fpage>347</fpage>
          {
          <fpage>353</fpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>