<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Crowdsourcing to Mobile Users: A Study of the Role of Platforms and Tasks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vincenzo Della Mea</string-name>
          <email>vincenzo.dellamea@uniud.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eddy Maddalena</string-name>
          <email>eddy.maddalena@uniud.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Mizzaro</string-name>
          <email>mizzaro@uniud.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Mathematics and Computer Science, University of Udine, Udine</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Experimentation</institution>
          ,
          <addr-line>Measurement</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2013</year>
      </pub-date>
      <abstract>
        <p>We study whether the tasks currently proposed on crowdsourcing platforms are adequate for mobile devices. We aim at understanding both (i) which crowdsourcing platforms, among the existing ones, are more adequate for mobile devices, and (ii) which kinds of tasks are more adequate for mobile devices. Results of a user study hint that: some crowdsourcing platforms seem more adequate for mobile devices than others; some inadequacy issues seem rather superficial and can be resolved by a better task design; some kinds of tasks are more adequate than others; and there might be some unexpected opportunities with mobile devices.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Categories and Subject Descriptors</title>
      <p>H.4.m [Information systems applications]: Miscellaneous</p>
    </sec>
    <sec id="sec-2">
      <title>General Terms</title>
      <p>Crowdsourcing, mobile devices.</p>
    </sec>
    <sec id="sec-3">
      <title>INTRODUCTION AND AIMS</title>
      <p>Among the phenomena that are acquiring increasing
importance in the information technology landscape, two are
the subjects of this paper: (i) crowdsourcing, and (ii) mobile
devices and applications.</p>
      <p>
        Crowdsourcing, i.e., the outsourcing of tasks typically
performed by a few experts to a large crowd as an open call,
has been shown to be reasonably effective in many cases,
like Wikipedia, the Chess match of Kasparov against the
world in 1999, and several others (see, e.g., [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] or even
http://en.wikipedia.org/wiki/Crowdsourcing). Several
crowdsourcing platforms (Amazon Mechanical Turk being
probably the best known) have also appeared on the Web:
they allow requesters to post the tasks they want to
crowdsource and workers to perform those tasks for a small reward
(usually a few cents).
      </p>
      <p>Copyright © 2013 for the individual papers by the papers’ authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors.</p>
      <p>
        Meanwhile, mobile devices (phones, smartphones, tablets,
and in the near future glasses, watches, and so on) have
become ubiquitous and are used to access the Web.
According to several statistics, in the next few years there will
be more Web accesses by mobile devices than by classical
desktop/laptop computers (see, e.g., [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]).
      </p>
      <p>In this paper we study the intersection of mobile and
crowdsourcing. We aim at understanding whether the tasks
currently proposed on crowdsourcing platforms are adequate
for mobile devices. By “adequate” we mean that they can
be performed effectively by using a mobile device in place of
a desktop/laptop computer. We specifically seek to answer
two research questions:
Q1 Which crowdsourcing platforms, among the existing ones,
are more adequate for mobile devices?
Q2 Which kinds of tasks are more adequate for mobile
devices?</p>
      <p>Besides the above-mentioned statistics on increasing
mobile usage, this research is also justified by the fact that
today people quite often access the Web on their mobile phones
for short periods of time, for example while commuting to
work by train or underground, while waiting for a bus or for
a friend, while in a car (and not driving), while standing in
a queue, etc. In other terms, there is plenty of human
workforce available in bursts of a few minutes (or seconds), and
this kind of workforce seems perfect for the crowdsourcing
scenario, where the tasks are usually short and the reward
is usually low. Moreover, some crowdsourcing tasks could
be more adequate to a mobile scenario than to a classical
desktop one: for example, taking pictures of a point of
interest (like a monument, a painting, or a billboard),
describing a real-life scene, or even recording movements,
destinations, and trajectories in an urban traffic setting. However,
to fruitfully exploit this workforce, it is necessary that the
platforms are adequate and the tasks are feasible. This
consideration also underlies our choice of focusing on the worker
side and neglecting the requester part.</p>
      <p>The paper is structured as follows. In Section 2 we briefly
survey the related work on mobile and crowdsourcing, trying
to focus on the research involving both aspects. In Sections 3
and 4 we describe two experiments aiming at answering the
two research questions above. In Section 5 we draw
conclusions and sketch future developments.</p>
    </sec>
    <sec id="sec-4">
      <title>RELATED WORK</title>
      <p>Although commercial crowdsourcing platforms seem
designed with a desktop/laptop user in mind, there has
already been some work on the idea of having workers use
mobile devices. We briefly survey it in this section.</p>
      <p>
        Musthag and Ganesan [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] focus on the mobile micro-task
market and present some statistics on mobile workers’ behavior.
      </p>
      <p>
        The mCrowd platform [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] is an iPhone-based mobile
crowdsourcing platform that enables mobile users to act
as both requesters and workers, and focuses on tasks like
geolocation-aware image collection, road traffic monitoring,
etc., that exploit the rich array of sensors available on iPhones.
      </p>
      <p>
        Eagle [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] describes txteagle, a mobile crowdsourcing
marketplace used in Kenya and Rwanda for tasks like
translations, polls, and transcriptions.
      </p>
      <p>
        Location-based distribution of tasks to mobile workers is
proposed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Some design criteria for mobile
crowdsourcing platforms are also presented and discussed. A similar
approach, focused on the specific domain of news reporting,
is presented in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]: SMS messages are used for location-based
assignment of crowdsourced news tasks.
      </p>
      <p>
        Narula and colleagues [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] focus on low-end mobile devices
and present MobileWorks, a platform for OCR tasks
specifically aimed at users from the developing world.
Experimental results demonstrate a high rate of task completion (120
per hour) and a high accuracy (99%). A similar approach
is presented in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], where the mClerk system is described;
experimental results again demonstrate the feasibility of
the approach. The viral diffusion of the system among
workers is also discussed.
      </p>
      <p>
        As a different approach, the CrowdSearch system, an
image search service for mobile phones that relies on Amazon
Mechanical Turk, is presented in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. It is interesting
because, although it does not exploit a mobile crowd, it is an
example of exploiting a crowd in (almost) real time.
      </p>
    </sec>
    <sec id="sec-5">
      <title>EXPERIMENT 1</title>
    </sec>
    <sec id="sec-6">
      <title>Aims</title>
      <p>The first experiment aims to verify the suitability of
existing crowdsourcing platforms for mobile devices (see
question Q1 in Section 1). We asked the participants to estimate
the difficulty of performing a task on both a mobile device
and a desktop/laptop computer.</p>
    </sec>
    <sec id="sec-7">
      <title>Participants</title>
      <p>Sixteen participants were involved in the experiment. All
of them were Italian students, aged between 16 and 30. We
required a good knowledge of English and familiarity with
computers and smartphones. Participants were randomly
subdivided into 4 groups (U1, U2, U3, U4), each one containing
four participants.</p>
    </sec>
    <sec id="sec-8">
      <title>Data</title>
      <p>
        We selected four of the most popular crowdsourcing
platforms (see Table 1). We downloaded some randomly
selected tasks from these platforms, for a total of 2717 tasks
(the exact number for each platform is shown in the third
column of Table 1). The download was performed in
October and November 2012. The downloaded tasks are
among those that can be performed by any worker, i.e.,
without any qualification. These are not huge samples: for
example, on mTurk one can count hundreds of thousands of
tasks available per month [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Still, the samples are
not negligible, since they amount to around 1%–5%. For each
task we extracted: identifier, title, required proof,
remuneration, time needed, requester identifier, and description. The
task collection is available upon request. Three examples of
tasks in our collection are (errors included):
      </p>
      <p>• Task example 1:
1. Go to http://OneDollarRiches.com/5737
2. Click on Join Now button
3. Invest 1 dollar by logging in into your Alertpay account
4. After that enter you personal details and login.
5. Join and finish signing up
While Sign up use same e-mail of your Alertpay account. because when u make ur refferaf there 1$ sing up go direct into ur alterpay account.</p>
      <p>• Task example 2:
1. Go to http://goo.gl/Dlzk
2. Click the link to go to the download
3. Complete a survey/offer on Sharecash and download the file
4. Send proof</p>
      <p>• Task example 3: Find the details for this Restaurant
– For this restaurant below, enter the details below
– You must confirm that the restaurant is still open
– Include the full address, e.g. http://www.thecheesecakefactory.com
– Do not include URLs to city guides and listings like Citysearch
Restaurant: Akasha Organics 160 North Main St. Ketchum
Fill in the text fields with this information: Still open, Restaurant name, Website Address, Phone number, Street Address, City, State, Zip code.</p>
    </sec>
    <sec id="sec-9">
      <title>Methods</title>
      <p>We randomly extracted 48 tasks, 12 from each platform,
and divided them into 4 groups (T1, T2, T3, T4). Each group
contains 12 tasks (3 tasks from each of the 4 platforms).
Task group Ti was assigned to user group Ui (e.g., task group
T1 was assigned to user group U1). We developed a web
application to show each participant the group of 12 tasks
assigned to his/her user group (see Figure 1). By using this
application, each participant recorded two estimates of
difficulty for each task, one for a desktop and one for a mobile
device (see the bottom part of the figure). Tasks were
presented in random order and participants did not know from
which platform the tasks were extracted.</p>
      <p>Difficulty was rated on a seven-point scale ranging
from trivial to impossible. For each task we therefore
obtained 4 estimates (from the participants in the same group).
We then converted the labels into the [0..6] range and
calculated the average of the difficulty estimates.</p>
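      <p>The label-to-score conversion and averaging can be sketched as follows. This is a minimal illustration: only the endpoint labels (“trivial”, “impossible”) are stated in the paper, so the intermediate label names and the sample ratings are hypothetical.</p>

```python
# Seven-point difficulty scale mapped onto [0..6]; the intermediate
# labels between "trivial" and "impossible" are placeholders.
LABELS = ["trivial", "very easy", "easy", "medium",
          "hard", "very hard", "impossible"]

def average_difficulty(ratings):
    """Convert each label to its position in [0..6] and average."""
    scores = [LABELS.index(r) for r in ratings]
    return sum(scores) / len(scores)

# Four estimates for one task, one per participant in the group:
print(average_difficulty(["easy", "medium", "easy", "trivial"]))  # 1.75
```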
    </sec>
    <sec id="sec-10">
      <title>Results</title>
      <p>Among the issues that make tasks inadequate for mobile
devices, we observed:
• use of the frame attribute in HTML pages;
• bad layout on a small-resolution display;
• need for a high-power CPU.</p>
      <p>Some of these task issues seem due to the task content, while
some others depend on how the Web interface is realized.
Many of them seem rather superficial and could be overcome
by better task design and/or better user interfaces.</p>
    </sec>
    <sec id="sec-10a">
      <title>EXPERIMENT 2</title>
    </sec>
    <sec id="sec-10b">
      <title>Aims</title>
      <p>The aim of the second experiment is to identify which task
kinds are more adequate for mobile devices (see question Q2
in Section 1). We therefore now focus on task features, and
not on platforms. Also, in place of asking participants for
estimates, we required them to actually perform the tasks
on both desktop and mobile devices, and we measured the
time spent on each task. Participants used two prototype
platforms that we built ad hoc for the experiment: one for
desktop devices using Google Web Toolkit, and the other
specifically made for mobile devices, by means of an Android
application. Figure 4 shows the resulting user interfaces.</p>
    </sec>
    <sec id="sec-11">
      <title>Participants and Data</title>
      <p>The 16 participants (the same as in the previous
experiment) were subdivided into 4 groups labeled U1, U2, U3, U4.</p>
      <p>To identify the kinds of task in a somewhat objective way,
we relied on the task categories usually requested in
crowdsourcing marketplaces. In more detail, we started from
the 11 categories suggested by Amazon Mechanical Turk
when creating a new task (see https://requester.mturk.
com/create/projects/new): Categorization, Data
Collection, Moderation of an Image, Sentiment, Survey, Survey
Link, Tagging of an Image, Transcription from A/V,
Transcription from an Image, Writing, and Other. To obtain a
manageable number of categories in our experiment, we
excluded 5 Mechanical Turk categories: Data Collection,
Survey and Survey Link (considered somewhat similar to
Sentiment), Transcription from A/V (to avoid technical issues
on mobile devices), and Other. We therefore selected 6 task
categories, those shown in Table 2. Then we created 4 new
tasks for each category, for a total of 24 tasks, and grouped
them into four task groups (labeled Ta, Tb, Tc, Td), each group
containing six tasks, one from each category.</p>
      <p>Using artificial tasks (i.e., tasks created by ourselves)
allowed us to remove any platform bias and the issues discussed
at the end of Section 3.5, which might have affected the
results. Also, their classification was easier (sometimes it is
not clear how to classify real tasks). Finally, this allowed us
to write the task descriptions in Italian, thus
removing any language issue from the experiment (all participants
were Italian native speakers). The created tasks are in all
respects similar to real tasks.</p>
    </sec>
    <sec id="sec-12">
      <title>Methods</title>
      <p>We took the usual special care to avoid any order and
learning bias. Each participant performed 6 tasks (one for
each of the categories in Table 2) on the desktop platform
and 6 other tasks (again, one for each category) on the
mobile one. His/her tasks were selected from two task groups,
depending on the user group the participant was assigned
to. To further avoid bias, participants in each group
alternately started from desktop or from mobile. Therefore,
each participant performed a total of 12 different tasks, half
on desktop and half on mobile. Each task was performed by
8 participants in two user groups, half of whom performed
it on mobile and half on desktop.</p>
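      <p>One counterbalanced assignment consistent with this description can be sketched as follows. The concrete pairing of user groups with task groups is an assumption for illustration, not necessarily the mapping actually used in the experiment.</p>

```python
from collections import Counter

# 24 tasks in four task groups of six (one task per category per group);
# the pairing of user groups with task-group pairs below is hypothetical.
task_groups = {"Ta": range(0, 6), "Tb": range(6, 12),
               "Tc": range(12, 18), "Td": range(18, 24)}
pairs = {"U1": ("Ta", "Tb"), "U2": ("Tb", "Tc"),
         "U3": ("Tc", "Td"), "U4": ("Td", "Ta")}

assignments = []  # (participant, task, device)
for group, (first, second) in pairs.items():
    for i in range(4):  # four participants per user group
        # alternate the starting device within each group
        d1, d2 = ("desktop", "mobile") if i % 2 == 0 else ("mobile", "desktop")
        for t in task_groups[first]:
            assignments.append((f"{group}-p{i}", t, d1))
        for t in task_groups[second]:
            assignments.append((f"{group}-p{i}", t, d2))

# Each task ends up performed by 8 participants, 4 on each device:
counts = Counter((task, device) for _, task, device in assignments)
assert all(counts[(t, d)] == 4 for t in range(24) for d in ("desktop", "mobile"))
```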
      <p>Statistics were calculated as follows. First, the
average time needed for task completion was calculated
for each task, separately for mobile and desktop performance
(i.e., averaged over 4 subjects each). Then category averages
were calculated from the task averages, again separately
for mobile and desktop devices.</p>
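      <p>The two-level averaging can be sketched as follows, with hypothetical completion times in seconds: each task is first averaged over its 4 subjects per device, and category means are then taken over the task means.</p>

```python
from statistics import mean

# Hypothetical times (seconds); keys are (category, task id).
times = {
    ("Cat", "t1"): {"mobile": [50, 55, 48, 52], "desktop": [40, 42, 38, 44]},
    ("Cat", "t2"): {"mobile": [60, 58, 62, 57], "desktop": [45, 47, 46, 50]},
}

def category_averages(times, device):
    """Average each task over its subjects, then average task means per category."""
    per_category = {}
    for (category, _task), by_device in times.items():
        per_category.setdefault(category, []).append(mean(by_device[device]))
    return {c: mean(ms) for c, ms in per_category.items()}

print(category_averages(times, "mobile"))   # {'Cat': 55.25}
```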
    </sec>
    <sec id="sec-13">
      <title>Results</title>
      <p>Figure 5 shows the average time to complete a task,
for each category and on both mobile and desktop devices.
Figure 6 shows the differences in average time to complete.
Some tasks are quicker: Cat, Mod, and Sen required less than
one minute on average, on both desktop and mobile. ImT
and Tra are a bit longer, between one and two minutes on
average, and Wri is even longer. As expected, all tasks are
faster on desktop, with the only exception of Wri: for it,
the participants autonomously decided to use the
voice-to-text functionality when on mobile, and this turned out to
be quicker than writing with a keyboard (although we did
not investigate the quality of the transcription). As highlighted
in Figure 6, ImT and Tra show a higher mobile–desktop
difference, in both absolute time and percentage, probably
because they require typing text into multiple fields, a
cumbersome activity when carried out on mobile.</p>
      <p>Looking at the percentage differences in Figure 6, one can
notice that the small difference of Cat in absolute terms is actually
quite high in percentage: even though the
difference in time is rather small, since Cat tasks are quite short
(as can be seen in Figure 5), this small value is important in
percentage terms. Conversely, looking at the two rightmost
bars, the percentage difference for Wri looks smaller than the
absolute time difference; this is again due to the average
length of the Wri task, which is quite high (see Figure 5).
Still, the improvement on mobile is substantial, being
around 20%.</p>
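      <p>The contrast between absolute and percentage differences can be illustrated with hypothetical category means; the actual values are those plotted in Figures 5 and 6.</p>

```python
# Hypothetical mean completion times in seconds.
avg = {"Cat": {"desktop": 30.0, "mobile": 40.0},    # short task
       "Wri": {"desktop": 180.0, "mobile": 150.0}}  # long task, faster on mobile

def differences(t):
    """Absolute (seconds) and percentage difference, mobile vs desktop."""
    absolute = t["mobile"] - t["desktop"]
    percent = 100.0 * absolute / t["desktop"]
    return absolute, percent

for cat in avg:
    a, p = differences(avg[cat])
    print(f"{cat}: {a:+.0f} s ({p:+.1f}%)")
# For Cat, a small +10 s gap is a large +33.3%;
# for Wri, a large -30 s gap is only -16.7%.
```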
    </sec>
    <sec id="sec-14">
      <title>CONCLUSIONS AND FUTURE WORK</title>
      <p>The work described in this paper is a first exploration of
the opportunities and challenges of outsourcing tasks to a
mobile crowd. Results provide preliminary evidence on the
inadequacy of current crowdsourcing platforms for mobile
devices, even if task complexity would be adequate for being
carried out on mobile devices. In more detail:
• Experiment 1 results show that, according to user
perception of difficulty, some crowdsourcing platforms might
be slightly more adequate for mobile devices than
others.
• Some inadequacy issues seem rather superficial and
can be resolved by a better task or interface design.
• Experiment 2 shows that tasks of different kinds, as
defined by mTurk categories, might present different
difficulties when carried out on desktop or on mobile
devices. This might hint at a first specialization of task
assignment, although examining features of easy and
difficult tasks might provide a better ad-hoc
specialization, perhaps even independent of the kind of task.
• Experiment 2 also confirms that mobile devices might
offer some unexpected opportunities, like the
voice-to-text solution, unexpected (by us) and autonomously adopted
by participants.</p>
      <p>We carried out two separate experiments, albeit
sharing subjects, in order to study two different aspects of
mobile crowdsourcing: crowdsourcing platform effects and task
category effects. The experiments are preliminary and the
results are not final, but this is consistent with our aims, which
were to begin studying the general issue of mobile
crowdsourcing. This exploratory attitude is also a motivation for
performing the two experiments with different
methodologies (asking the participants for an estimate of difficulty,
and having participants perform the actual tasks). Of
course, these experiments, or similar ones, could have been
run by means of some crowdsourcing platform themselves.</p>
      <p>We preferred a more traditional approach and started with
classical user studies, but we do plan to do that in the future.</p>
      <p>To further develop this work, other experiments can be
imagined. For example, the same experiments described
here could be repeated in real-world scenarios (on the train,
on the road, in school rooms, or in crowded places) to obtain more
realistic results. It is also possible to imagine an extended
crowdsourcing platform that, on the basis of the context of a
worker (time, date, geolocation, habits and preferences,
mobile device sensors, etc.), automatically filters and selects
tasks tailored to that specific context.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Alt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Shirazi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Kramer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Nawaz</surname>
          </string-name>
          .
          <article-title>Location-based crowdsourcing: extending crowdsourcing to the real world</article-title>
          .
          <source>In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries</source>
          ,
          <source>NordiCHI '10</source>
          , pages
          <fpage>13</fpage>
          -
          <lpage>22</lpage>
          , New York, NY, USA,
          <year>2010</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Eagle</surname>
          </string-name>
          .
          <article-title>txteagle: Mobile crowdsourcing</article-title>
          .
          <source>In Proceedings of the 3rd International Conference on Internationalization, Design and Global Development: Held as Part of HCI International</source>
          <year>2009</year>
          , IDGD '
          <volume>09</volume>
          , pages
          <fpage>447</fpage>
          -
          <lpage>456</lpage>
          , Berlin, Heidelberg,
          <year>2009</year>
          . Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Thies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Cutrell</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Balakrishnan</surname>
          </string-name>
          .
          <article-title>mClerk: enabling mobile crowdsourcing in developing regions</article-title>
          .
          <source>In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '12</source>
          , pages
          <fpage>1843</fpage>
          -
          <lpage>1852</lpage>
          , New York, NY, USA,
          <year>2012</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Howe</surname>
          </string-name>
          .
          <article-title>Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business</article-title>
          .
          <source>Random House Inc</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P. G.</given-names>
            <surname>Ipeirotis</surname>
          </string-name>
          .
          <article-title>Analyzing the Amazon Mechanical Turk marketplace</article-title>
          .
          <source>XRDS</source>
          ,
          <volume>17</volume>
          (
          <issue>2</issue>
          ):
          <fpage>16</fpage>
          -
          <lpage>21</lpage>
          , Dec.
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Meeker</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Wu</surname>
          </string-name>
          .
          <source>Internet Trends D11 Conference - The annual Internet Trends Report</source>
          ,
          <year>2013</year>
          . http://www.slideshare.net/kleinerperkins/kpcb-internet-trends-2013.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Musthag</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganesan</surname>
          </string-name>
          .
          <article-title>Labor dynamics in a mobile micro-task market</article-title>
          . In W. E. Mackay,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Brewster</surname>
          </string-name>
          , and S. Bødker, editors,
          <source>CHI</source>
          , pages
          <fpage>641</fpage>
          -
          <lpage>650</lpage>
          . ACM,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Narula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gutheim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Rolnitzky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Hartmann</surname>
          </string-name>
          .
          <article-title>MobileWorks: A mobile crowdsourcing platform for workers at the bottom of the pyramid</article-title>
          .
          <source>Proc. HCOMP11</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H.</given-names>
            <surname>Väätäjä</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Vainio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Sirkkunen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Salo</surname>
          </string-name>
          .
          <article-title>Crowdsourced news reporting: supporting news content creation with mobile phones</article-title>
          .
          <source>In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services</source>
          ,
          <source>MobileHCI '11</source>
          , pages
          <fpage>435</fpage>
          -
          <lpage>444</lpage>
          , New York, NY, USA,
          <year>2011</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>T.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kumar</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganesan</surname>
          </string-name>
          .
          <article-title>CrowdSearch: exploiting crowds for accurate real-time image search on mobile phones</article-title>
          .
          <source>In MobiSys '10: Proceedings of the 8th international conference on Mobile systems, applications and services</source>
          , pages
          <fpage>77</fpage>
          -
          <lpage>90</lpage>
          . ACM Press,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Marzilli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Holmes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ganesan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Corner</surname>
          </string-name>
          .
          <article-title>mCrowd: a platform for mobile crowdsourcing</article-title>
          .
          <source>In Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems, SenSys '09</source>
          , pages
          <fpage>347</fpage>
          -
          <lpage>348</lpage>
          , New York, NY, USA,
          <year>2009</year>
          . ACM.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>