<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>CaptureBias: Supporting Media Scholars with Ambiguity-Aware Bias Representation for News Videos</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Markus de Jong</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Panagiotis Mavridis</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lora Aroyo</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro Bozzon</string-name>
<email>a.bozzon@tudelft.nl</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jesse de Vos</string-name>
<email>jdvos@beeldengeluid.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Johan Oomen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Antoaneta Dimitrova</string-name>
          <email>a.l.dimitrova@fgga.leidenuniv.nl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alec Badenoch</string-name>
          <email>A.W.Badenoch@uu.nl</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff3">
          <institution>User-Centric Data Science Group, Vrije Universiteit Amsterdam</institution>
        </aff>
        <aff id="aff0">
          <label>0</label>
<institution>Beeld en Geluid</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Leiden University</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Utrecht University</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>In this project we explore the presence of ambiguity in textual and visual media and its influence on accurately understanding and capturing bias in news. We study this topic in the context of supporting media scholars and social scientists in their media analysis. Our focus lies on racial and gender bias as well as framing, and on comparing their manifestation across modalities, cultures and languages. In this paper we lay out a human-in-the-loop approach to investigate the role of ambiguity in the detection and interpretation of bias.</p>
      </abstract>
      <kwd-group>
        <kwd>Bias detection</kwd>
        <kwd>bias in news video files</kwd>
        <kwd>ambiguity-aware bias representation</kwd>
        <kwd>disagreement</kwd>
        <kwd>machine learning</kwd>
        <kwd>crowdsourcing</kwd>
        <kwd>human in the loop</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>
        The interpretation of textual and visual media is typically a subjective process in which personal views and biases become interlaced with, and indistinguishable from, the actual media content. For example, ethnic groups can be misrepresented by numbers in crime reports [<xref ref-type="bibr" rid="ref10">10</xref>], and international news agencies can adjust the contents of their reports to tap into certain biases that they believe are present in the intended public [<xref ref-type="bibr" rid="ref9">9</xref>]. The different points of view thus typically get expressed as a disagreement among different authors and consumers of the media content. This disagreement can be seen as a signal of the presence of ambiguity, and it affects both the detection of bias in visual and textual media and the understanding of the meaning of the media message.
      </p>
      <p>
        Studies of visual and textual media bias can be quite labor-intensive when performed manually [<xref ref-type="bibr" rid="ref21">21</xref>], e.g. through manually labeling hundreds of hours of video [<xref ref-type="bibr" rid="ref9">9</xref>]. With the exponential growth of visual (news) content, many machine learning and human computation approaches are emerging to automate the labeling, analysis and processing of video and textual material. In this work, we aim to further extend the state of the art in large-scale processing of textual and visual media, in order to support media professionals and humanities and social science scholars in their process of analyzing news media (with respect to studying framing, gender and racial bias in news). The central point here is the study of content and semantic ambiguity when it comes to determining the topic, the events and the sentiment of the media material. Further, we aim to understand what causes this ambiguity, what the different types of ambiguity are, and how they influence the understanding and capturing of bias in visual and textual media across different languages.
      </p>
      <p>The concrete objectives of this research are to support typical digital humanities analysis tasks, e.g.:</p>
      <p>- distant reading of large collections of visual and textual news, for understanding patterns and contexts of framing, racial and gender bias in news over time and across different cultures and languages;
- close reading of specific instances of visual media, for understanding aspects, properties and causes of framing, racial and gender bias in news over time and across different cultures and languages.</p>
      <p>Therefore, we investigate the role of ambiguity of the media content, as well as the ambiguity of the topic(s), context(s) and specific event(s) and entities depicted in the news media, for the detection of framing, racial and gender bias. Our research is guided by the following hypotheses:
- There are different causes for disagreement in the interpretation of visual media, which lead to different types of ambiguity;
- Ambiguity found in visual media can be related to subjectivity;
- Different types of ambiguity and subjectivity can be used to detect different types of biases, such as framing, racial bias and gender bias.</p>
    </sec>
    <sec id="sec-2">
      <title>Related work</title>
      <p>
        Here we present related work on the disagreement and ambiguity that surface in annotation tasks. As mentioned, disagreement is a signal for ambiguity or subjectivity, and ambiguity itself can in turn be a sign of subjectivity. These signals appear in the different manifestations of bias, such as the misrepresentation of entities through framing [<xref ref-type="bibr" rid="ref9">9</xref>] or through the different sentiments attached to these entities. Entities associated with gender and race can also often be misrepresented [<xref ref-type="bibr" rid="ref10 ref18">18, 10</xref>]. In the following we present work related to the detection of these signals and bias manifestations.
Several methods study or leverage disagreement in order to assess the quality of annotations produced by a crowd. For instance, in computational linguistics [<xref ref-type="bibr" rid="ref4">4</xref>] use generalizability theory as a means to capture the reliability of an annotation and to identify the reasons behind the level of confidence and reliability we can have in it. In [<xref ref-type="bibr" rid="ref17">17</xref>] they also use crowdsourcing for annotation, identify different subgroups of disagreement between crowd workers, and compare them with expert annotations. Also, [<xref ref-type="bibr" rid="ref8">8</xref>] propose an agreement measure that solves a number of problems arising when other agreement measures are used for interval values; instead, they reason about the type of agreement or disagreement by looking at the distribution of answers within an interval of values, where suitable for the problem. Finally, [<xref ref-type="bibr" rid="ref25">25</xref>] also identify disagreement and divergence within groups of coders and evaluate two tree-based ranking metrics to compare disagreements.
      </p>
      <p>
        CrowdTruth [<xref ref-type="bibr" rid="ref16">16</xref>] is a platform that applies disagreement analytics to generate ground truth data through crowdsourcing. It has been used to identify and name entities as well as to determine annotation ambiguity [<xref ref-type="bibr" rid="ref15">15</xref>], to detect language ambiguity in medical relations in texts [<xref ref-type="bibr" rid="ref11">11</xref>], and to determine the intrinsic ambiguity of events in video event detection [<xref ref-type="bibr" rid="ref14">14</xref>]. Another automated method that uses the crowd predicts the ambiguity of images to assist in a crowd-based foreground object segmentation task [<xref ref-type="bibr" rid="ref13">13</xref>].
      </p>
      <p>Now, we take a look at the types of bias we are interested in: framing, racial bias and gender bias. We give a short definition of each, followed by related research methods for those biases.</p>
      <p>
        A frame of a message can be described as 'highlighting some bits of information about an item that is the subject of communication, thereby elevating them in salience' [<xref ref-type="bibr" rid="ref12">12</xref>], and the act of framing can be described as 'selecting and highlighting some features of reality while omitting others' [<xref ref-type="bibr" rid="ref12">12</xref>]. For research purposes, it is therefore important to find the amount of attention that is given to a certain element (e.g. highlighting or downplaying) and what is omitted.
      </p>
      <p>Gender and racial bias in media is most often investigated via misrepresentations of groups and differences in how groups are presented. An example of misrepresentation is when the number of members of group X shown on screen is not representative of the number of members of group X in that society. An example of a difference in presentation is when group X is presented or described in a different manner, e.g. shown with a different sentiment than group Y, described with different adjectives, or with the focus on different properties of the groups. Therefore, the goals in investigating gender and racial bias here are (1) a quantitative comparison with population statistics to detect misrepresentation, and (2) the rather more complex qualitative comparison of how the groups are represented.</p>
      <p>
        Framing can be investigated through manual thematic analysis [<xref ref-type="bibr" rid="ref21">21</xref>]. However, automated methods also exist, such as using keyword clustering to identify stakeholders standing on different sides [<xref ref-type="bibr" rid="ref19">19</xref>]. Word-based quantitative text analysis and computer-assisted methods have also been used, e.g. to identify interest group frames in the framing of environmental policy in the EU [<xref ref-type="bibr" rid="ref5">5</xref>]. In the case of framing in video, we mentioned the investigation into framing in TV news in countries that lie in overlapping spheres of influence of Russia and the EU [<xref ref-type="bibr" rid="ref9">9</xref>], namely Belarus, Moldova and Ukraine. In that study, 607 video news broadcasts were manually labeled on subject (EU, Russia), tone (positive, negative, neutral, none), theme (e.g. culture, history, security, values) and topic (e.g. external events or developments, human interest stories, visit from a state official). The relative number of reports on either the EU or Russia was also compared. The results included statistics showing that different news channels catered to particular local preferences (e.g. a shared religion, a shared history), but that (apart from the Russian channels) the news was in general most often balanced and neutral in tone and did not differ in tone towards either the EU or Russia.
      </p>
      <p>
        As mentioned, research can uncover racial bias expressed by discrepancies between the actual on-screen role representation of ethnic groups and data from official statistics [<xref ref-type="bibr" rid="ref10">10</xref>]. Example results from this 2017 investigation, performed in Los Angeles, showed that blacks were accurately reported as perpetrators, victims and police officers, while Latinos, although accurately reported as perpetrators, were underreported as victims and police officers. Whites were significantly overrepresented in all three categories. A similar quantitative comparison can be carried out to investigate gender bias, e.g. to investigate balanced reporting in sports [<xref ref-type="bibr" rid="ref18">18</xref>]. This research also included a qualitative part in which raters were asked to label announcers' language use in relation to the athlete's gender (e.g. appearance, marital status) and imagery (e.g. active vs. non-active pose, sports vs. non-sports context). The researchers reported no significant quantitative gender bias, although some differences were still found on other criteria. In other work, gender bias in Dutch newspapers, expressed by the stereotypical representation of male vs. female leadership in politicians, was investigated with a dictionary approach [<xref ref-type="bibr" rid="ref1">1</xref>].
      </p>
      <p>
        To investigate framing and other biases, it is important to determine differences in message sentiment. Several automated text sentiment tools based on natural language processing (NLP) have been developed [<xref ref-type="bibr" rid="ref20 ref7">20, 7</xref>]. Voice tone is another possible source for sentiment analysis [<xref ref-type="bibr" rid="ref26">26</xref>]. A relatively new modality in sentiment analysis is video, in which facial recognition techniques are used to analyze actors' facial expressions ('facial affect') [<xref ref-type="bibr" rid="ref24">24</xref>]. Some work has also been done on creating an ensemble of all these sentiment analysis methods [<xref ref-type="bibr" rid="ref22">22</xref>].
      </p>
      <p>
        The methods put forward to analyze framing, gender and racial bias, however, do not make use of ambiguity in the crowd, even though such subjectivity may give us valuable information that could help us better detect bias and create better labels on subjective aspects such as sentiment. Therefore, we propose an ambiguity-aware method, building on the CrowdTruth methodology [<xref ref-type="bibr" rid="ref16">16</xref>], that makes use of ambiguity in the crowd to better detect bias.
      </p>
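      <p>As an illustration of how disagreement can quantify ambiguity, the sketch below computes a simplified, CrowdTruth-inspired clarity score for one media unit: the mean pairwise cosine similarity between workers' annotation vectors. The actual CrowdTruth metrics are richer (they also weight workers and annotations by quality), so this is only a didactic stand-in.</p>
      <preformat>
```python
from itertools import combinations

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm if norm else 0.0

def unit_clarity(worker_vectors):
    """Mean pairwise cosine over workers' annotation vectors for one unit.

    A high score means workers agree (a clear unit); a low score signals
    disagreement and hence potential ambiguity.
    """
    pairs = list(combinations(worker_vectors, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# Three workers annotating one unit over three candidate labels.
clear = unit_clarity([[1, 0, 0], [1, 0, 0], [1, 0, 0]])      # full agreement
ambiguous = unit_clarity([[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # full disagreement
```
      </preformat>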
    </sec>
    <sec id="sec-4">
      <title>The Approach: Disagreement-based Ambiguity for Bias Detection</title>
      <p>
        We perform a number of knowledge acquisition experiments with media scholars and social scientists to determine aspects of bias in different modalities, cultures and languages. Next to this, we also study ambiguity expressions, causes and types through crowdsourcing experiments for the annotation of sentiment, topics and opinions in news videos and articles. The main focus here is to understand (1) how disagreement is manifested as a signal for ambiguity and (2) how ambiguity is related to subjectivity, and ultimately how these two lead to a more accurate representation of bias in video and textual news. For this we apply, adapt and extend the CrowdTruth approach [<xref ref-type="bibr" rid="ref16 ref2 ref3">3, 2, 16</xref>], which has been used to study disagreement-based ambiguity in various domains. We employ a hybrid human-machine system, in which basic processing of both video and text material is performed to be used as a seed for the human computation tasks. Considering the large amount of video and text articles involved, we envision an active learning cycle in which machine learning components continuously learn from humans in the loop.
      </p>
      <sec id="sec-4-1">
        <title>Dataset</title>
        <p>Next, we describe the two types of data that we use and compare in our datasets:
(1) textual and (2) video data.</p>
        <p>Textual dataset Our textual dataset consists of news articles written in English from online sources such as the BBC, The Guardian, CNN, Fox News, The New York Times, The Moscow Times, Sputnik and Breitbart News. To identify target news events to study in videos, we use Wikipedia (www.wikipedia.com) pages focusing on historical and political events. Wikipedia provides crowd-sourced and editor-vetted articles from different contributors. We aim to extract event names and related event entities, e.g. people, organizations, locations and times, and to compare their representation, in terms of opinions, perspectives and sentiment, across the different news sources.</p>
        <p>Video dataset We perform experiments with a video dataset of short English-language newsreels (i.e. a few minutes long, with spoken dialogue), accompanied by their metadata, e.g. a short video description, title, tags, (auto-generated) subtitles and user comments. The videos in this dataset are collected from online news channels such as CNN, BBC, Al Jazeera, Sputnik, RT (formerly Russia Today) and France24. We also take advantage of the keyword-annotated video datasets provided by YouTube in the YouTube-8M dataset (https://research.google.com/youtube8m/).</p>
      </sec>
      <sec id="sec-4-2">
        <title>Data Preprocessing</title>
        <p>We enrich the subtitles, transcripts, in-video text and video metadata with the
set of events and related entities extracted from relevant Wikipedia pages and
news articles.</p>
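        <p>A minimal version of this enrichment is a gazetteer lookup over the text. The entity list below is a hypothetical stand-in for the events and entities extracted from Wikipedia and news articles; a real pipeline would tokenize rather than match substrings, to avoid over-matching short names.</p>
        <preformat>
```python
def enrich(text, gazetteer):
    """Tag the known entities that occur in a subtitle or transcript line.

    gazetteer: mapping from entity name to entity type; hand-made here,
    built from Wikipedia and news articles in the pipeline.
    """
    found = {}
    lowered = text.lower()
    for name, entity_type in gazetteer.items():
        if name.lower() in lowered:
            found[name] = entity_type
    return found

tags = enrich(
    "The summit in Brussels was attended by the EU commission.",
    {"Brussels": "location", "EU": "organization", "Moscow": "location"},
)
```
        </preformat>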
        <p>
          Ambiguity signals in dataset We want to capture the different ambiguities in the dataset itself. For instance, using ControCurator (http://controcurator.org/) we process the user comments on Wikipedia pages and YouTube videos in order to capture possible controversies. Also, for Wikipedia we can use a method similar to [<xref ref-type="bibr" rid="ref23">23</xref>] in order to find controversial news articles from Wikipedia or Contropedia (http://contropedia.net/).
        </p>
        <p>News event detection and data gathering After finding possible bias candidates with the above tools on Wikipedia pages, we extract events using NLP processing. When Wikipedia articles are not available (for instance in the case of very recent news), we use different news article sources for the event and also make use of an initial video input from one source directly. We also use controversial video comments on these events and, supported by WordNet (wordnet.princeton.edu), we create seed words to assist the crowd in annotating an event. Once the events are identified, we can collect video data from the different video channels of our initial dataset.</p>
      </sec>
      <sec id="sec-4-3">
        <title>Disagreement for Bias Cues Extraction</title>
        <p>In order to identify the framing, gender and racial bias introduced in news videos, we compare the information gathered from the video with Wikipedia and newspaper texts, as well as with other videos (e.g. from other channels). When we are able to determine which main entities are related to an event, we can detect misrepresentations (of e.g. facts, actors) that might indicate framing. If a particular gender or race is misrepresented or represented in a certain way, we can infer gender and racial bias. As stated, we base our bias cues on disagreement in both automatically extracted information and the crowd.</p>
        <p>To be specific, in order to annotate videos for their events, we want to extract particular cues with both machine learning and human computation. Ideally, we want to identify with machine learning what needs to be annotated in the videos and transcripts by humans, in order to find out e.g. what is being said, who is reporting, who is talking, how long they are talking, and whether they are present at the scene of the news event.</p>
        <p>
          To make use of all data modalities in our news videos, we investigate combining existing APIs for textual, voice- and face-based sentiment analysis [<xref ref-type="bibr" rid="ref22">22</xref>] in relation to the entities. Also, to be able to attach particular sentiments to the entities [<xref ref-type="bibr" rid="ref6">6</xref>], we can compare different APIs and state-of-the-art methods, use their "disagreement" as a way to assign a confidence to the combined output, and then apply human computation to validate the sentiment analysis output of the machine learning methods. CrowdTruth (http://crowdtruth.org/) can be used to reason about the disagreement on the various subjects. Given that the crowd can also disagree on a particular subject, we investigate the reasons why the crowd could interpret a given message differently with regard to, for instance, their demographics.
        </p>
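        <p>The use of inter-tool "disagreement" as a confidence signal can be sketched as follows. The scores are hypothetical outputs of different sentiment tools in [-1, 1]; units with low confidence would be routed to the crowd for validation.</p>
        <preformat>
```python
def combined_sentiment(api_scores):
    """Average several tools' sentiment scores and derive a confidence
    from their spread: the more the tools disagree, the lower the
    confidence, and the stronger the case for crowd validation."""
    n = len(api_scores)
    mean = sum(api_scores) / n
    variance = sum((s - mean) ** 2 for s in api_scores) / n
    confidence = 1.0 - min(1.0, variance)
    return mean, confidence

agree_mean, agree_conf = combined_sentiment([0.6, 0.5, 0.7])      # tools agree
disagree_mean, disagree_conf = combined_sentiment([0.9, -0.8, 0.1])  # tools disagree
```
        </preformat>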
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Discussion</title>
      <p>One of the limitations of our proposal is the lack of reliable data that captures an 'opinion-neutral' definition of recent events, since we use Wikipedia pages both to extract the ground-truth events that seed our search for them in the media, and to read the intensity of edits and changes to these pages as an indication of possible controversy, bias, or variety of opinions.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>This research is supported by the CaptureBias project (https://capturebias.eu/), part of the VWData Research Programme funded by the Startimpuls programme of the Dutch National Research Agenda, route "Value Creation through Responsible Access to and use of Big Data" (NWO 400.17.605/4174). CrowdTruth, the framework for crowdsourcing ground truth data, is available at http://crowdtruth.org/.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Aaldering</surname>
          </string-name>
          , L.,
          <string-name>
            <surname>Van Der Pas</surname>
          </string-name>
          , D.J.:
          <article-title>Political leadership in the media: Gender bias in leader stereotypes during campaign and routine times</article-title>
          .
          <source>British Journal of Political</source>
          Science p.
          <volume>121</volume>
          (
          <year>2018</year>
          ). https://doi.org/10.1017/S0007123417000795
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Aroyo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Welty</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>The three sides of crowdtruth</article-title>
          .
          <source>Journal of Human Computation</source>
          <volume>1</volume>
          ,
          <issue>31</issue>
          {
          <fpage>34</fpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Aroyo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Welty</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Truth Is a Lie: CrowdTruth and the Seven Myths of Human Annotation</article-title>
          .
          <source>AI Magazine</source>
          <volume>36</volume>
          (
          <issue>1</issue>
          ),
          <volume>15</volume>
          {
          <fpage>24</fpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bayerl</surname>
            ,
            <given-names>P.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paul</surname>
            ,
            <given-names>K.I.</given-names>
          </string-name>
          :
          <article-title>Identifying sources of disagreement: Generalizability theory in manual annotation studies</article-title>
          .
          <source>Comput. Linguist</source>
          .
          <volume>33</volume>
          (
          <issue>1</issue>
          ), 3{8 (Mar
          <year>2007</year>
          ). https://doi.org/10.1162/coli.
          <year>2007</year>
          .
          <volume>33</volume>
          .
          <issue>1</issue>
          .3, http://dx.doi.org/10.1162/coli.
          <year>2007</year>
          .
          <volume>33</volume>
          .
          <issue>1</issue>
          .
          <fpage>3</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5. Borang,
          <string-name>
            <given-names>F.</given-names>
            ,
            <surname>Eising</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            , Kluver, H.,
            <surname>Mahoney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            ,
            <surname>Naurin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Rasch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Rozbicka</surname>
          </string-name>
          ,
          <string-name>
            <surname>P.</surname>
          </string-name>
          :
          <article-title>Identifying frames: A comparison of research methods</article-title>
          .
          <source>Interest Groups &amp; Advocacy</source>
          <volume>3</volume>
          (
          <issue>2</issue>
          ),
          <volume>188</volume>
          {
          <fpage>201</fpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>Calais</given-names>
            <surname>Guerra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.H.</given-names>
            ,
            <surname>Veloso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Meira</surname>
          </string-name>
          , Jr.,
          <string-name>
            <given-names>W.</given-names>
            ,
            <surname>Almeida</surname>
          </string-name>
          ,
          <string-name>
            <surname>V.</surname>
          </string-name>
          :
          <article-title>From bias to opinion: A transfer-learning approach to real-time sentiment analysis</article-title>
          .
          <source>In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</source>
          . pp.
          <volume>150</volume>
          {
          <fpage>158</fpage>
          . KDD '11,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , New York, NY, USA (
          <year>2011</year>
          ). https://doi.org/10.1145/2020408.2020438, http://doi.acm.
          <source>org/10</source>
          .1145/2020408.2020438
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Chaumartin</surname>
            ,
            <given-names>F.R.:</given-names>
          </string-name>
          <article-title>Upar7: A knowledge-based system for headline sentiment tagging</article-title>
          .
          <source>In: Proceedings of the 4th International Workshop on Semantic Evaluations</source>
          . pp.
          <volume>422</volume>
          {
          <fpage>425</fpage>
          .
          <article-title>Association for Computational Linguistics (</article-title>
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Checco</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roitero</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maddalena</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mizzaro</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Demartini</surname>
          </string-name>
          , G.:
          <article-title>Let's agree to disagree: Fixing agreement measures for crowdsourcing</article-title>
          (
          <year>October 2017</year>
          ), http://eprints.whiterose.ac.uk/122865/, c 2017,
          <article-title>Association for the Advancement of Artificial Intelligence</article-title>
          (www.aaai.org).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Dimitrova</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frear</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mazepus</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Toshkov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boroda</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chulitskaya</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grytsenko</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Munteanu</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parvan</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ramasheuskaya</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>The elements of Russia's soft power: Channels, tools, and actors promoting Russian influence in the Eastern Partnership countries</article-title>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Dixon</surname>
            ,
            <given-names>T.L.</given-names>
          </string-name>
          :
          <article-title>Good guys are still always in white? Positive change and continued misrepresentation of race and crime on local television news</article-title>
          .
          <source>Communication Research</source>
          <volume>44</volume>
          (
          <issue>6</issue>
          ),
          <fpage>775</fpage>
          –
          <lpage>792</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Dumitrache</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aroyo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Welty</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Crowdsourcing ground truth for medical relation extraction</article-title>
          .
          <source>arXiv preprint arXiv:1701.02185</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Entman</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          :
          <article-title>Framing: Toward clarification of a fractured paradigm</article-title>
          .
          <source>Journal of Communication</source>
          <volume>43</volume>
          (
          <issue>4</issue>
          ),
          <fpage>51</fpage>
          –
          <lpage>58</lpage>
          (
          <year>1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Gurari</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>He</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiong</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sameki</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jain</surname>
            ,
            <given-names>S.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sclaroff</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Betke</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grauman</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Predicting foreground object ambiguity and efficiently crowdsourcing the segmentation(s)</article-title>
          .
          <source>International Journal of Computer Vision</source>
          <volume>126</volume>
          (
          <issue>7</issue>
          ),
          <fpage>714</fpage>
          –
          <lpage>730</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Iepsma</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gevers</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Inel</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aroyo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Crowdsourcing for video event detection</article-title>
          .
          <source>In: Collective Intelligence</source>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Inel</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aroyo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Harnessing diversity in crowds and machines for better NER performance</article-title>
          .
          <source>In: European Semantic Web Conference</source>
          . pp.
          <fpage>289</fpage>
          –
          <lpage>304</lpage>
          . Springer (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Inel</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khamkham</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cristea</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dumitrache</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rutjes</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van der Ploeg</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Romaszko</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aroyo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sips</surname>
          </string-name>
          , R.J.:
          <article-title>CrowdTruth: Machine-human computation framework for harnessing disagreement in gathering annotated data</article-title>
          .
          <source>In: The Semantic Web – ISWC 2014</source>
          , pp.
          <fpage>486</fpage>
          –
          <lpage>504</lpage>
          . Springer (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Kairam</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heer</surname>
          </string-name>
          , J.:
          <article-title>Parting crowds: Characterizing divergent interpretations in crowdsourced annotation tasks</article-title>
          .
          <source>In: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work &amp; Social Computing</source>
          . pp.
          <fpage>1637</fpage>
          –
          <lpage>1648</lpage>
          . CSCW '16, ACM, New York, NY, USA (
          <year>2016</year>
          ). https://doi.org/10.1145/2818048.2820016
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Kinnick</surname>
            ,
            <given-names>K.N.</given-names>
          </string-name>
          :
          <article-title>Gender bias in newspaper profiles of 1996 Olympic athletes: A content analysis of five major dailies</article-title>
          .
          <source>Women's Studies in Communication</source>
          <volume>21</volume>
          (
          <issue>2</issue>
          ),
          <volume>212</volume>
          {
          <fpage>237</fpage>
          (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          :
          <article-title>Frame mapping and analysis of news coverage of contentious issues</article-title>
          .
          <source>Social Science Computer Review</source>
          <volume>15</volume>
          (
          <issue>4</issue>
          ),
          <fpage>367</fpage>
          –
          <lpage>378</lpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Pang</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts</article-title>
          .
          <source>In: Proceedings of the 42nd annual meeting on Association for Computational Linguistics</source>
          . p.
          <fpage>271</fpage>
          . Association for Computational Linguistics (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Philo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Briant</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Donald</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Bad news for refugees</article-title>
          . Pluto Press (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Poria</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peng</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hussain</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Howard</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cambria</surname>
          </string-name>
          , E.:
          <article-title>Ensemble application of convolutional neural networks and multiple kernel learning for multimodal sentiment analysis</article-title>
          .
          <source>Neurocomputing</source>
          <volume>261</volume>
          ,
          <fpage>217</fpage>
          –
          <lpage>230</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Rad</surname>
            ,
            <given-names>H.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barbosa</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Identifying controversial articles in wikipedia: A comparative study</article-title>
          .
          <source>In: Proceedings of the Eighth Annual International Symposium on Wikis and Open Collaboration</source>
          . pp.
          <fpage>7:1</fpage>
          –
          <lpage>7:10</lpage>
          . WikiSym '12, ACM, New York, NY, USA (
          <year>2012</year>
          ). https://doi.org/10.1145/2462932.2462942
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Sariyanidi</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gunes</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cavallaro</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Automatic analysis of facial affect: A survey of registration, representation, and recognition</article-title>
          . vol.
          <volume>37</volume>
          , pp.
          <fpage>1113</fpage>
          –
          <lpage>1133</lpage>
          . IEEE (
          <year>2015</year>
          )
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Zade</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Drouhard</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chinh</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gan</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aragon</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Conceptualizing disagreement in qualitative coding</article-title>
          .
          <source>In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems</source>
          . pp.
          <fpage>159:1</fpage>
          –
          <lpage>159:11</lpage>
          . CHI '18, ACM, New York, NY, USA (
          <year>2018</year>
          ). https://doi.org/10.1145/3173574.3173733
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jia</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dong</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yin</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lei</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Inferring emotion from conversational voice data: A semi-supervised multi-path generative neural network approach</article-title>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>