<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Introducing Online Profile Learning in Crowdsourcing Task Routing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Silvana Castano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universita degli Studi di Milano, DI - Via Comelico</institution>
          ,
          <addr-line>39 - 20135 Milano</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>24</fpage>
      <lpage>27</lpage>
      <abstract>
        <p>In this paper, we present an initial implementation, with experimental results, of online profile learning in Argo+, a framework for crowdsourcing task routing characterized by i) feature-based representation of both tasks and workers, and ii) learning techniques inspired by Rocchio relevance feedback for predicting the most appropriate task for a given worker to execute.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        In recent years, the crowdsourcing philosophy has gained a lot of
attention, and many crowdsourcing systems/platforms have appeared on the web for
satisfying the growing need for marketplaces where the offer of requesters
providing jobs to execute can meet the workforce provided by the crowd. Humans
have different knowledge and abilities; thus, a crowd worker can be trustworthy
on a certain task campaign that is coherent with her/his attitudes, while
she/he can be inaccurate on another campaign with different topics and skill
requirements not compliant with her/his attitudes. As a result, the capability
to effectively discover and represent the profile of engaged crowd workers is
becoming a strategic asset of emerging crowdsourcing marketplaces. The goal is to
selectively choose a qualified and motivated crowd to recruit/involve in a given
campaign according to the required knowledge/abilities based on the features of
the tasks to execute. In this direction, the use of machine-learning techniques in
crowdsourcing applications has been appearing in the recent literature for
mining emerging worker skills from the analysis of executed tasks [
        <xref ref-type="bibr" rid="ref12 ref6 ref7">6, 7, 12</xref>
        ]. Online
learning techniques based on (multi-armed) bandit algorithms have also been
proposed for improving the quality of crowdsourcing results [
        <xref ref-type="bibr" rid="ref10 ref14 ref8">8, 10, 14</xref>
        ]. Online
learning of worker skills is also concerned with the so-called task routing issue,
that is, the capability of a crowdsourcing system to assign a task to a worker
based on the expectation of obtaining a successful contribution from her/his
answer. In the literature, popular solutions are characterized by the idea of relying on
human factors for addressing task routing (e.g., [
        <xref ref-type="bibr" rid="ref1 ref9">1, 9</xref>
        ]).
      </p>
      <p>
        In this paper, we present an initial implementation of online profile learning
in Argo+, a framework for crowdsourcing task routing characterized by i)
feature-based representation of both tasks and workers, and ii) learning techniques
inspired by Rocchio relevance feedback for predicting the most appropriate task
for a given worker to execute. A detailed description of the Argo+ framework is
provided in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In the following, we focus on discussing the preliminary results
obtained on a real crowdsourcing campaign, by comparing the performance of
Argo+ against a baseline with conventional task routing techniques.
      </p>
      <p>The paper is organized as follows. Section 2 presents the basic elements of
the proposed Argo+ implementation. Experimental results are then discussed in
Section 3. Concluding remarks are finally provided in Section 4.</p>
    </sec>
    <sec id="sec-2">
      <title>Learning-based task routing</title>
      <p>A crowdsourcing campaign is characterized by a crowd of workers W = {w1, ..., wk}
involved in the execution of a set of tasks T = {t1, ..., tn}. A task t ∈ T is
defined as t = ⟨id_t, a_t, m_t, d_t, F_t⟩, where id_t is the unique task identifier, a_t is the
task action, m_t is the task modality, d_t is the task description, and F_t is the set of
task-features. A task action a_t denotes the task target, namely the goal that needs
to be satisfied through crowd execution (e.g., picture labeling, movie recognition,
sentiment evaluation). A modality m_t represents the kind of worker answer required
in task execution (e.g., creation, decision). A description d_t represents the task
request given to each worker for illustrating what is demanded of her/him in
the task execution. A set of task-features F_t is manually associated with the task
for providing a description of task requirements, namely a specification of the
capabilities expected from a worker for being involved in the execution of the
task t. For each feature f ∈ F_t, a task-feature weight ω(f) is associated to denote
the relevance of f within the task-features F_t. A worker w ∈ W is defined as
w = ⟨id_w, F_w⟩, where id_w is the unique worker identifier and F_w is the worker
profile expressed as a set of worker-features. A worker-feature f ∈ F_w denotes a
worker capability, either knowledge expertise or skill, and it is associated with a
worker-feature weight ω(f) denoting the "degree" of expertise/ability associated
with the worker.
</p>
      <p>2.1 Assigning tasks to workers</p>
      <p>
        For enforcing task routing, Argo+ relies on a task classification procedure for
aggregating the tasks T to execute into K classes, so that tasks with similar
features F_t are associated with the same class. In the proposed implementation
of Argo+, probabilistic topic modeling is exploited for task classification. The
choice is motivated by the need to enforce a soft aggregation mechanism, where
a task with a plurality of features can have multiple associated classes and can
be exploited by workers with different expertise, each one focused on a different
class. In particular, the proposed solution is characterized by the use of
Latent Dirichlet Allocation (LDA) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] over the task-features, characterized by two
discrete probability distributions, namely φ and θ. φ describes the probability
distribution of task-features over classes: φ_k denotes the probability
of each task-feature f of being associated with the k-th class among the K possible
classes. θ describes the probability distribution of classes over tasks: θ_t denotes
the probability of the task t of belonging to each class k among
the K possible classes. Finally, we denote by θ_t^k the probability of the task t of being
associated with the class k. The choice of K, namely the number of classes on
which LDA works for task classification, is a configuration parameter and is
discussed in Section 3.
      </p>
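      <p>Since task classification relies on LDA estimated via collapsed Gibbs sampling, the procedure can be sketched as follows. This is a minimal, illustrative sampler over bags of task-features; the function name, the hyperparameters alpha and beta, and the data layout are assumptions for illustration, not part of Argo+.</p>
      <preformat>
```python
import random
from collections import defaultdict

def gibbs_lda(docs, K, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed Gibbs sampler for LDA over bags of features.

    docs: list of feature-token lists (one list per task).
    Returns (theta, phi): task-class and class-feature distributions.
    """
    rng = random.Random(seed)
    vocab = sorted({f for d in docs for f in d})
    V = len(vocab)
    # z[d][i]: class currently assigned to the i-th token of document d
    z = [[rng.randrange(K) for _ in d] for d in docs]
    ndk = [[0] * K for _ in docs]               # doc-class counts
    nkw = [defaultdict(int) for _ in range(K)]  # class-feature counts
    nk = [0] * K                                # tokens per class
    for d, doc in enumerate(docs):
        for i, f in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][f] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, f in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][f] -= 1; nk[k] -= 1
                # full conditional p(z = k | rest) for this token
                w = [(ndk[d][j] + alpha) * (nkw[j][f] + beta) / (nk[j] + V * beta)
                     for j in range(K)]
                r = rng.random() * sum(w)
                k = 0
                while r > w[k]:
                    r -= w[k]; k += 1
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][f] += 1; nk[k] += 1
    theta = [[(ndk[d][k] + alpha) / (len(doc) + K * alpha) for k in range(K)]
             for d, doc in enumerate(docs)]
    phi = [{f: (nkw[k][f] + beta) / (nk[k] + V * beta) for f in vocab}
           for k in range(K)]
    return theta, phi
```
      </preformat>
      <p>The returned theta plays the role of the task-class distribution θ, and phi the class-feature distribution φ.</p>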
      <p>
        Consider a worker w and the associated worker profile F_w. When w asks for a task
t to execute, the probability distributions (φ, θ) created by task classification are
exploited. Through φ, Argo+ calculates the maximum a posteriori estimation θ_w
given the worker features F_w. This is done by using collapsed Gibbs sampling [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]
to learn the latent assignment of features to classes given the observed features
F_w. In particular, we repeatedly estimate the probability p(f | k) of a feature
f being assigned to a class k, and we exploit this to estimate the probability
p(k | w) of the class k being the correct assignment for the worker w. This
sampling process is repeated until convergence, so that for each class k ∈ K we
finally estimate:

θ_w^k ∝ ( Σ_{f ∈ F_w} ω(f)_k ) / ( Σ_{f ∈ F_w} Σ_{j ∈ K} ω(f)_j ),   (1)

where ω(f)_i denotes the weight of features of type f that have been assigned to
class i. Then, from the distribution θ_w, we select the class z and the task t such that:

z = arg max_{z ∈ K} θ_w^z,   (2)

t = arg max_{t ∈ T} θ_t^z.   (3)

We stress that a task t is available for assignment until the number of task
executions expected by the system is reached; then it is marked as finished and
excluded from the assignment mechanism.</p>
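      <p>The selection steps of Equations 1-3 can be sketched as follows. This is a simplified single-pass scoring instead of the full Gibbs estimation, and the dictionary-based data layout is an assumption for illustration.</p>
      <preformat>
```python
def classify_worker(F_w, phi, K):
    """Eq. (1): theta_w^k is proportional to the class-k weight mass of the worker's features."""
    scores = [sum(w * phi[k].get(f, 0.0) for f, w in F_w.items()) for k in range(K)]
    total = sum(scores) or 1.0
    return [s / total for s in scores]

def route_task(F_w, phi, theta_tasks, available, K):
    """Pick the best class z for the worker, then the best available task in z."""
    theta_w = classify_worker(F_w, phi, K)
    z = max(range(K), key=lambda k: theta_w[k])            # Eq. (2)
    t = max(available, key=lambda t: theta_tasks[t][z])    # Eq. (3)
    return z, t
```
      </preformat>
      <p>Restricting the arg max in Equation 3 to the available tasks reflects the exclusion of finished tasks from the assignment mechanism.</p>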
      <p>Example 1. Consider enforcing Argo+ on a system with task classification based
on K = 10 and a set of thematic tags used as features both for tasks and
workers. A worker w asks for a task to execute, and the profile F_w is defined by
the following features:</p>
      <p>F_w = ⟨ (web search, 0.85), (classification, 0.85), (smartphone, 0.51), (text, 0.34), ... ⟩</p>
      <p>Starting from F_w, we exploit Equation 1 in order to classify the worker w with
respect to the classes K, obtaining the distribution θ_w. From θ_w, we exploit Equation 2 to select the most relevant class for the worker
profile, that is k = 2. The top-3 features associated with k = 2 in φ_2 are:
classification, tweets, and web search, which motivates the relevance of the class with
respect to the worker profile F_w. Given the class, it is now possible to exploit
Equation 3 in order to select a task t for worker execution. The features F_t of
the task selected for assignment to w are F_t = ⟨ (web search, 1.0), (classification,
1.0), (smartphone, 1.0) ⟩
</p>
      <p>2.2 Learning worker profiles</p>
      <p>
        Given a task t executed by a worker w, we need to assess the quality of the
provided worker answer in order to decide how to update the worker profile, and thus
how to enforce learning. We call ρ(t) the final task result determined by the
crowdsourcing system. We note that different solutions can be employed for
determining ρ(t). Popular solutions are based on majority voting mechanisms,
where the final task result corresponds to the answer that obtained the majority
of preferences by the involved workers. Alternative solutions are also possible,
such as, for example, statistics-based techniques [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. We say that a worker w
provided a successful contribution to the task t when the worker answer coincides
with (or is equivalent to) ρ(t). Otherwise, we say that the worker w provided an
unsuccessful contribution to the task t. According to this, we define the
worker-task result σ(w, t) as follows:

σ(w, t) = 1 if w provided a successful contribution to t, and σ(w, t) = 0 otherwise.
      </p>
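      <p>With a majority-voting mechanism, ρ(t) and σ(w, t) can be sketched as follows. This is a minimal sketch in which ties are broken arbitrarily; as noted above, statistics-based alternatives are also possible.</p>
      <preformat>
```python
from collections import Counter

def task_result(answers):
    """rho(t): the majority answer among the workers' answers for a task."""
    return Counter(answers.values()).most_common(1)[0][0]

def worker_task_result(answers, worker):
    """sigma(w, t): 1 if the worker's answer coincides with rho(t), 0 otherwise."""
    return 1 if answers[worker] == task_result(answers) else 0
```
      </preformat>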
      <p>
        For updating a worker profile, Argo+ relies on learning techniques inspired
by Rocchio relevance feedback [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. When a worker w executes a task t, we
associate the worker w with a new set of features F'_w = F_w ∪ F_t. We denote by
ω(f)_w the weight of the feature f in F_w (possibly 0 if f was not in F_w),
and by ω(f)_t the weight of the feature f in F_t (possibly 0 if f was not in F_t).
Then, the new weight ω(f)' of each feature in F'_w is computed as follows:

ω(f)' = α ω(f)_w + (1 − α) θ_t^z σ(w, t) ω(f)_t,   (4)

where α is a damping factor in [0, 1] that determines how much of the original
weight of the profile features contributes to the new weight, and z is the class
chosen for the task assignment. The idea behind profile update is that when a
worker profile feature is not included in the task features, its weight is reduced
by the factor α. Otherwise, the new profile feature weight ω(f)' is computed
as the weighted sum of the previous profile feature weight ω(f)_w and the
task feature weight ω(f)_t, whose contribution is proportional to the relevance
θ_t^z of the task t in the class z. The task feature weight ω(f)_t is forced to be
equal to 0 when the worker does not provide a successful contribution on the
task (resulting in a reduction of the corresponding profile feature weight).
      </p>
      <p>Example 2. Consider the task assignment of Example 1. The worker w executed
t and σ(w, t) = 1. We update F_w by applying Equation 4. The updated worker
profile F'_w is the following (the class-task relevance θ_t^2 = 0.77):
      </p>
      <p>F'_w = ⟨ (web search, 0.78), (classification, 0.78), (smartphone, 0.74), (text, 0.03), ... ⟩</p>
      <p>We note that the three features of t affect the worker profile by changing
the relative feature weights. The features web search and classification remain the most
relevant, but the weight of smartphone, which is a feature of t, is increased. By
contrast, the feature text of F_w becomes remarkably less relevant in the new
worker profile, due to the fact that it is not part of the task feature set F_t. After
the profile update, Argo+ will exploit the new worker profile for the subsequent
task assignments to w.</p>
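      <p>The update of Equation 4 can be sketched as a direct transcription over the merged feature set. Any normalization that Argo+ may apply to the resulting profile is omitted here, so the toy values below illustrate the mechanics rather than reproducing Example 2 exactly.</p>
      <preformat>
```python
def update_profile(F_w, F_t, theta_tz, sigma, alpha=0.3):
    """Eq. (4): omega(f)' = alpha*omega(f)_w + (1 - alpha)*theta_t^z*sigma*omega(f)_t,
    computed for every feature in the merged set F'_w = F_w union F_t."""
    merged = set(F_w) | set(F_t)
    return {f: alpha * F_w.get(f, 0.0)
               + (1 - alpha) * theta_tz * sigma * F_t.get(f, 0.0)
            for f in merged}
```
      </preformat>
      <p>Note how a profile-only feature is damped by α, while a task-only feature enters the profile with weight proportional to θ_t^z, and an unsuccessful contribution (σ = 0) zeroes the task-side term.</p>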
    </sec>
    <sec id="sec-3">
      <title>Experimental results</title>
      <p>For evaluation, we present some preliminary experimental results based on the
comparison of the proposed Argo+ implementation against a basic task routing
mechanism called, from now on, baseline.</p>
      <p>3.1 Experiment setup</p>
      <p>Our experiment relies on a crowdsourcing campaign (named paintings) run through
the Argo crowdsourcing system (i.e., the system version implementing baseline
routing) between November and December 2018. The experiment involved 367
students from the Faculty of Arts and Literature at the University of Milan, who
were asked to examine a dataset of paintings in order to choose, for each
painting, the correct author among a choice of six possible painters. The paintings
dataset is composed of 948 paintings from 56 different authors spanning from
the 13th century to the 20th century. Each task has been executed by more than
one worker, for a total of 8,573 executions. The fact that the paintings dataset
provides a correct answer for each task (i.e., the correct name of the painting
author) makes it possible to easily evaluate the effectiveness of the work done by
each worker (i.e., successful contribution) in terms of the number of correct answers
given to the task questions.</p>
      <p>To set up the experiment for evaluating Argo+, we compare the success rate
of the baseline execution of paintings against the success rate obtained through
two different executions of paintings in Argo+: i) one execution with a flat worker
profile (called Argo+noprofile) where ω(f) = 0 is initially defined for each feature
(i.e., worker-feature), and ii) one execution with a custom worker profile (called
Argo+profile) where ω(f) = 1 for each feature on which the worker has declared
a competence. Competences declared by workers have been collected through a
self-evaluation questionnaire about knowledge of painters and different periods
in art history. Task and worker features have been taken from Wikidata
(https://www.wikidata.org) and they include the name of the author, the
year, and the Wikidata thematic categories available for a painting. An example
of task and worker answer is given in Figure 1.</p>
      <p>Figure 1. Example of task and worker answer. Worker options: Raffaello Sanzio,
Gustav Klimt, Piero della Francesca, Francisco Goya, Giotto, Michelangelo Buonarroti.
Task features: Raffaello Sanzio, 1516, High Renaissance, Portrait paintings of cardinals.
Example of worker answer:
{ "gold_answer" : "Q5597",
  "argo_answer" : "Raffaello Sanzio",
  "worker_answer_id" : "Q5432",
  "worker_answer" : "Francisco Goya",
  "task_id" : 1102,
  "answer_timestamp" : "2017-11-13T14:42:19",
  "worker_id" : 527,
  "task_refused" : false }</p>
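      <p>Given the record format of Figure 1, the successful-contribution check for a single answer reduces to comparing the worker's answer identifier with the gold one. A sketch over the example record follows; the field names are those shown in Figure 1.</p>
      <preformat>
```python
import json

# Example worker answer from Figure 1 (timestamp quoted to make the record valid JSON).
record = json.loads("""
{ "gold_answer": "Q5597",
  "argo_answer": "Raffaello Sanzio",
  "worker_answer_id": "Q5432",
  "worker_answer": "Francisco Goya",
  "task_id": 1102,
  "answer_timestamp": "2017-11-13T14:42:19",
  "worker_id": 527,
  "task_refused": false }
""")

# sigma(w, t) for this record: the worker chose Q5432 (Francisco Goya),
# while the gold answer is Q5597, so the contribution is unsuccessful.
sigma = 1 if record["worker_answer_id"] == record["gold_answer"] else 0
```
      </preformat>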
      <p>The goal of our experimental evaluation is to assess whether Argo+ improves
the success rate with respect to baseline. In order to simulate the execution of
Argo+noprofile and Argo+profile on exactly the same set of workers and answers
used in baseline, our experiments are based on the idea of changing the
time-sequence of tasks executed by each worker in baseline according to the assignment
schedule determined by Argo+. We aim at verifying whether the tasks
successfully executed by a worker w are assigned to w before the others. Since the
task answers are the same for all the experiments, the overall success rate is
the same as well. However, if Argo+ performs better than baseline, we expect
correct tasks to be executed earlier. In other terms, we aim at verifying whether Argo+ reaches
a success rate better than baseline by taking into account the first r task
assignments. In particular, we call r (request timestamp) the timestamp at which
the crowdsourcing system receives the request for a task to execute by a worker,
and we call s(r, E) the success rate of the system execution E at the request
timestamp r. A system execution is a stream of task answers, each one collected
from a worker at a certain timestamp. In our evaluation, baseline is the reference
system execution, while Argo+noprofile and Argo+profile represent alternative
system executions of baseline where the time-sequence of task answers is changed
according to the Argo+ routing mechanism. The success rate s(r, E) is defined
as follows:</p>
      <p>
s(r, E) = (1/r) Σ_{i=1}^{r} σ(w, t)_i,   (a)      S_R = ∫_{r=1}^{R} s(r, E) dr,   (b)

where σ(w, t)_i in (a) is the worker-task result received by the system at the i-th
request timestamp in the execution E, and (b) measures the overall system
performance of task routing, with R representing the overall number of successfully
executed tasks in a system execution (i.e., R is the sum of all the σ(w, t) = 1
in E). Given two different system executions E_a and E_b, the delta value Δ_r(E_a, E_b)
represents how much the success rate of E_a changes with respect to E_b at time r,
and is defined as Δ_r(E_a, E_b) = s(r, E_a) / s(r, E_b).</p>
      <p>3.2 Considerations</p>
      <p>Experiments have been performed with a number of classes K = 30 for task
classification and a damping factor α = 0.3 for worker profile learning. The
comparison of baseline against Argo+noprofile and Argo+profile on success rate s(r, E)
and delta value Δ_r is shown for the first 200 task requests in Figures 2(a)
and 2(b), respectively. We observe that both Argo+noprofile and Argo+profile
succeed in improving the success rate of baseline, since successfully executed tasks
are assigned to workers before other tasks in most cases. For the first 200
requests, the success rate of Argo+profile is around 20% better than baseline. It
is also interesting to note that, at the very beginning of the system execution
(r &lt; 50), the behavior of Argo+noprofile and Argo+profile is very unstable, since
learning has insufficient information for recognizing the appropriate task class for
each worker. However, Argo+ quickly learns the worker profile (r ≥ 50) and this
has a positive impact on the assignment of subsequent tasks. The performance
of Argo+noprofile becomes similar to baseline after the 300th worker request. This
is due to the fact that Argo+ first selects tasks that are highly relevant for the
worker profile, while subsequent assignments concern residual tasks of the K
classes for which the relevance for the worker profile is weaker.</p>
      <p>Figure 2. (a) Success rate s(r, E) and (b) increment of performance with respect
to the baseline, over the first 200 task requests, for baseline, Argo+noprofile, and
Argo+profile.</p>
      <p>Finally, we compare baseline and Argo+ through S_R, and we obtain
S_R = 399.59 for baseline, S_R = 424.66 for Argo+profile, and S_R = 399.61 for
Argo+noprofile. As a result, we observe that the use of a questionnaire for
initializing the worker profile provides the best performance among the three considered
system executions (see also Figure 2(b) on the increment value). However, after a
small number of executions, the performance of the learning system without the
initial set-up of the worker profile becomes similar to that of the system
execution initialized with the questionnaire. This confirms the intuition behind the
use of flat profiles, which argues that the self-evaluation of worker skills/abilities
could be misplaced with respect to the real worker expertise, and thus could
sometimes damage the performance of the crowdsourcing system.</p>
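      <p>The evaluation measures of Section 3.1 can be sketched over a stream of worker-task results σ(w, t), with the integral in (b) approximated by a discrete sum over requests. This is a minimal sketch for illustration.</p>
      <preformat>
```python
def success_rate(stream):
    """s(r, E): fraction of successful contributions among the first r requests,
    computed for each r along the execution stream of sigma(w, t) values."""
    rates, successes = [], 0
    for r, sigma in enumerate(stream, start=1):
        successes += sigma
        rates.append(successes / r)
    return rates

def overall_performance(stream):
    """S_R: area under the success-rate curve, here a discrete sum over requests."""
    return sum(success_rate(stream))
```
      </preformat>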
    </sec>
    <sec id="sec-4">
      <title>Concluding remarks</title>
      <p>
        In this paper, we presented an implementation of profile learning techniques in
the Argo+ crowdsourcing framework [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] with some experimental results. Ongoing
research activities aim i) to extend the experimentation to consider
multiple kinds of tasks with different actions, modalities, and descriptions, and ii)
to improve the Argo+ framework to support worker profile management across
different crowdsourcing campaigns with different task/worker features.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Amer-Yahia</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Roy</surname>
            ,
            <given-names>S.B.</given-names>
          </string-name>
          :
          <article-title>Human Factors in Crowdsourcing</article-title>
          .
          <source>PVLDB</source>
          <volume>9</volume>
          (
          <issue>13</issue>
          ),
          <volume>1615</volume>
          -
          <fpage>1618</fpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Arun</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suresh</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Madhavan</surname>
            ,
            <given-names>C.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Murthy</surname>
            ,
            <given-names>M.N.</given-names>
          </string-name>
          :
          <article-title>On Finding the Natural Number of Topics with Latent Dirichlet Allocation: Some Observations</article-title>
          .
          <source>In: Proc. of the 14th PAKDD Conference</source>
          . Hyderabad,
          <string-name>
            <surname>India</surname>
          </string-name>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Blei</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ng</surname>
            ,
            <given-names>A.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jordan</surname>
            ,
            <given-names>M.I.</given-names>
          </string-name>
          :
          <article-title>Latent dirichlet allocation</article-title>
          .
          <source>Journal of machine Learning research 3(Jan)</source>
          ,
          <volume>993</volume>
          -
          <fpage>1022</fpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Castano</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ferrara</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montanelli</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>A Multi-Dimensional Approach to Crowd-Consensus Modeling and Evaluation</article-title>
          .
          <source>In: Proc. of the 34th ER Int. Conference</source>
          . Stockholm, Sweden (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Castano</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ferrara</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Montanelli</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>A Conceptual Framework for Crowdsourcing Task Assignment with Online Profile Learning</article-title>
          . In: Submitted to the
          <source>37th ER Int. Conference. Xi'an, China</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Gadiraju</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fetahu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kawase</surname>
          </string-name>
          , R.:
          <article-title>Training Workers for Improving Performance in Crowdsourcing Microtasks</article-title>
          .
          <source>In: Proc. of the 10th EC-TEL. Toledo</source>
          ,
          <string-name>
            <surname>Spain</surname>
          </string-name>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Goncalves</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Feldman</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kostakos</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bernstein</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Task Routing and Assignment in Crowdsourcing Based on Cognitive Abilities</article-title>
          .
          <source>In: Proc. of the 26th WWW Int. Conference</source>
          . Perth,
          <string-name>
            <surname>Australia</surname>
          </string-name>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Jain</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Narayanaswamy</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Narahari</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>A Multiarmed Bandit Incentive Mechanism for Crowdsourcing Demand Response in Smart Grids</article-title>
          .
          <source>In: Proc. of the 28th AAAI Conference on Artificial Intelligence</source>
          . pp.
          <volume>721</volume>
          -
          <fpage>727</fpage>
          .
          <string-name>
            <surname>Quebec</surname>
          </string-name>
          ,
          <string-name>
            <surname>Canada</surname>
          </string-name>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Karger</surname>
            ,
            <given-names>D.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Oh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shah</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Budget-Optimal Task Allocation for Reliable Crowdsourcing Systems</article-title>
          .
          <source>Oper. Res</source>
          .
          <volume>62</volume>
          (
          <issue>1</issue>
          ),
          <volume>1</volume>
          -
          <fpage>24</fpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>M.:</given-names>
          </string-name>
          <article-title>An Online Learning Approach to Improving the Quality of CrowdSourcing</article-title>
          .
          <source>IEEE Transactions on Networking</source>
          <volume>25</volume>
          (
          <issue>4</issue>
          ),
          <volume>2166</volume>
          -
          <fpage>2179</fpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Manning</surname>
            ,
            <given-names>C.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Raghavan</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , Schutze, H.:
          <article-title>Introduction to Information Retrieval</article-title>
          , vol.
          <volume>1</volume>
          . Cambridge university press Cambridge (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Organisciak</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Teevan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dumais</surname>
            ,
            <given-names>S.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalai</surname>
            ,
            <given-names>A.T.</given-names>
          </string-name>
          :
          <article-title>A Crowd of Your Own: Crowdsourcing for On-Demand Personalization</article-title>
          .
          <source>In: Proc. of the 2nd AAAI HCOMP</source>
          . Pittsburgh, USA (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Porteous</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Newman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ihler</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Asuncion</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smyth</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Welling</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Fast Collapsed Gibbs Sampling for Latent Dirichlet Allocation</article-title>
          .
          <source>In: Proc. of the 14th ACM SIGKDD Int. Conference</source>
          . pp.
          <volume>569</volume>
          -
          <issue>577</issue>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Tran-Thanh</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stein</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rogers</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jennings</surname>
            ,
            <given-names>N.R.:</given-names>
          </string-name>
          <article-title>Efficient Crowdsourcing of Unknown Experts using Bounded Multi-armed Bandits</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>214</volume>
          ,
          <fpage>89</fpage>
          -
          <fpage>111</fpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>