<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Crowdsourcing to Mobile Users: A Study of the Role of Platforms and Tasks</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Vincenzo</forename><surname>Della Mea</surname></persName>
							<email>vincenzo.dellamea@uniud.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Mathematics and Computer Science</orgName>
								<orgName type="institution">University of Udine</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Eddy</forename><surname>Maddalena</surname></persName>
							<email>eddy.maddalena@uniud.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Mathematics and Computer Science</orgName>
								<orgName type="institution">University of Udine</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Stefano</forename><surname>Mizzaro</surname></persName>
							<email>mizzaro@uniud.it</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Mathematics and Computer Science</orgName>
								<orgName type="institution">University of Udine</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Crowdsourcing to Mobile Users: A Study of the Role of Platforms and Tasks</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">398B1CA6EC8150C3FF75FA5162617006</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T01:23+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>H.4.m [Information systems applications]: Miscellaneous</term>
					<term>Experimentation</term>
					<term>Measurement</term>
					<term>Crowdsourcing</term>
					<term>mobile devices</term>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We study whether the tasks currently proposed on crowdsourcing platforms are adequate for mobile devices. We aim at understanding both (i) which of the existing crowdsourcing platforms are more adequate for mobile devices, and (ii) which kinds of tasks are more adequate for mobile devices. The results of a user study hint that: some crowdsourcing platforms seem more adequate for mobile devices than others; some inadequacy issues seem rather superficial and can be resolved by better task design; some kinds of tasks are more adequate than others; and there might be some unexpected opportunities with mobile devices.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Besides the above-mentioned statistics on increasing mobile usage, this research is also justified by the fact that today people quite often access the Web on their mobile phones for short periods of time, for example while commuting to work on a train or the underground, while waiting for a bus or for a friend, while in a car (and not driving), while standing in a queue, etc. In other terms, there is plenty of human workforce available in bursts of a few minutes (or seconds), and this kind of workforce seems perfect for the crowdsourcing scenario, where tasks are usually short and rewards are usually low. Moreover, some crowdsourcing tasks could be more adequate for a mobile scenario than for a classical desktop one: for example, taking pictures of some point of interest (like a monument, a painting, or a billboard), describing a real-life scene, or even recording movements, destinations, and trajectories in an urban traffic setting. However, to fruitfully exploit this workforce, the platforms must be adequate and the tasks feasible. This consideration also underlies our choice of focusing on the worker side and neglecting the requester side.</p><p>The paper is structured as follows. In Section 2 we briefly survey the related work on mobile devices and crowdsourcing, focusing on research involving both aspects. In Sections 3 and 4 we describe two experiments aiming at answering the two research questions above. In Section 5 we draw conclusions and sketch future developments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>1</head><p>DBCrowd 2013: First VLDB Workshop on Databases and Crowdsourcing</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">RELATED WORK</head><p>Although commercial crowdsourcing platforms seem designed with a desktop/laptop user in mind, there has already been some work on the idea of having workers use mobile devices. We briefly survey it in this section.</p><p>Musthag and Ganesan <ref type="bibr" target="#b7">[7]</ref> focus on the mobile micro-task market and present some statistics on mobile workers' behavior.</p><p>The mCrowd platform <ref type="bibr" target="#b11">[11]</ref> is an iPhone-based mobile crowdsourcing platform that enables mobile users to act as both requesters and workers, and focuses on tasks like geolocation-aware image collection, road traffic monitoring, etc., that exploit the rich array of sensors available on iPhones.</p><p>Eagle <ref type="bibr" target="#b2">[2]</ref> describes txteagle, a mobile crowdsourcing marketplace used in Kenya and Rwanda for tasks like translations, polls, and transcriptions.</p><p>Location-based distribution of tasks to mobile workers is proposed in <ref type="bibr" target="#b1">[1]</ref>, where some design criteria for mobile crowdsourcing platforms are also presented and discussed. A similar approach, focused on the specific domain of news reporting, is presented in <ref type="bibr" target="#b9">[9]</ref>: SMS messages are used for the location-based assignment of crowdsourced news tasks.</p><p>Narula and colleagues <ref type="bibr" target="#b8">[8]</ref> focus on low-end mobile devices and present MobileWorks, a platform for OCR tasks specifically aimed at users from the developing world. Experimental results demonstrate a high rate of task completion (120 per hour) and a high accuracy (99%). A similar approach is presented in <ref type="bibr" target="#b3">[3]</ref>, where the mClerk system is described. Experimental results again witness the feasibility of the approach. The viral diffusion of the system among workers is also discussed.</p><p>As a different approach, the CrowdSearch system, an image search service for mobile phones that relies on Amazon Mechanical Turk, is presented in <ref type="bibr" target="#b10">[10]</ref>. It is interesting because, although it does not exploit a mobile crowd, it is an example of exploiting a crowd in (almost) real time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">EXPERIMENT 1 3.1 Aims</head><p>The first experiment aims to verify the suitability of existing crowdsourcing platforms for mobile devices (see question Q1 in Section 1). We asked the participants to estimate the difficulty of performing a task on both a mobile device and a desktop/laptop computer.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Participants</head><p>Sixteen participants were involved in the experiment. All of them were Italian students, aged between 16 and 30. We required a good knowledge of English and familiarity with computers and smartphones. Participants were randomly subdivided into 4 groups (U1, U2, U3, U4), each containing four participants.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Data</head><p>We selected four of the most popular crowdsourcing platforms (see Table <ref type="table" target="#tab_0">1</ref>). We downloaded some randomly selected tasks from these platforms, for a total of 2717 tasks (the exact number for each platform is shown in the third column of Table <ref type="table" target="#tab_0">1</ref>). The download was performed in October and November 2012. The downloaded tasks are among those that can be performed by any worker, i.e., without any qualification. These are not huge samples: for example, on mTurk one can count hundreds of thousands of tasks available per month. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Join and finish signing up</head><p>While Sign up use same e-mail of your Alertpay account. because when u make ur refferaf there 1$ sing up go direct into ur alterpay account. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Methods</head><p>We randomly extracted 48 tasks, 12 from each platform, and divided them into 4 groups (T1, T2, T3, T4). Each group contains 12 tasks (3 tasks from each of the 4 platforms). Task group Ti was assigned to user group Ui (e.g., task group T1 was assigned to user group U1). We developed a web application to show each participant the group of 12 tasks assigned to his/her user group (see Figure <ref type="figure" target="#fig_0">1</ref>). Using this application, each participant recorded two estimates of difficulty for each task, one for a desktop and one for a mobile device (see the bottom part of the figure). Tasks were presented in random order, and participants did not know from which platform the tasks were extracted.</p><p>Difficulty was provided on a seven-point scale ranging from trivial to impossible. For each task we therefore obtained 4 estimates (from the participants in the same group). We then converted the labels into the [0..6] range and calculated the average of the difficulty estimates.</p></div>
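The task-grouping and label-averaging procedure described above can be sketched as follows (a minimal Python reconstruction under our own assumptions; the platform ids, the label wording, and all function names are ours, not part of the experiment software):

```python
import random
from statistics import mean

PLATFORMS = ["mTurk", "micW", "minW", "shortT"]
# Assumed wording for the seven-point scale from trivial to impossible.
LABELS = ["trivial", "very easy", "easy", "medium",
          "hard", "very hard", "impossible"]

def make_task_groups(tasks_by_platform, n_groups=4, per_group=3, seed=42):
    """Randomly spread per-platform tasks across n_groups task groups,
    so that each group ends up with per_group tasks per platform."""
    rng = random.Random(seed)
    groups = [[] for _ in range(n_groups)]
    for tasks in tasks_by_platform.values():
        chosen = rng.sample(tasks, n_groups * per_group)
        for i, task in enumerate(chosen):
            groups[i % n_groups].append(task)
    return groups

def average_difficulty(labels):
    """Convert the 4 per-task labels to 0..6 scores and average them."""
    return mean(LABELS.index(l) for l in labels)
```

With 12 tasks per platform this yields 4 groups of 12 tasks each (3 per platform), matching the design in Section 3.4.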
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5">Results</head><p>Figure <ref type="figure" target="#fig_1">2</ref> shows the average estimated difficulty, on desktop and mobile, for each platform. Tasks from mTurk are estimated slightly more difficult than those from MicroWorkers, MinuteWorkers, and ShortTask. The difference of difficulty estimates between desktop and mobile is also shown in Figure <ref type="figure" target="#fig_2">3</ref>: difficulty estimates are consistently higher on mobile devices, both in absolute terms and as a percentage of the desktop difficulty.</p><p>By manually analyzing the task collection, we realized that some tasks are inadequate for mobile devices for some typical reasons:</p><p>• overly long descriptions;</p><p>• technical obstacles like scrolling problems, unsupported audio formats and/or plugins, pages with Adobe Flash, etc.;</p><p>• bad layout on a small-resolution display;</p><p>• need for a high-power CPU.</p><p>Some of these issues seem due to the task content, while others depend on how the Web interface is realized. Many of them seem rather superficial and can be overcome by better task design and/or better user interfaces.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">EXPERIMENT 2 4.1 Aims</head><p>The aim of the second experiment is to identify which kinds of tasks are more adequate for mobile devices (see question Q2 in Section 1). We therefore now focus on task features, and not on platforms. Also, instead of asking participants for estimates, we required them to actually perform the tasks on both desktop and mobile devices, and we measured the time spent on each task. Participants used two prototype platforms that we built ad hoc for the experiment: one for desktop devices, using the Google Web Toolkit, and the other specifically made for mobile devices, by means of an Android application. Figure <ref type="figure" target="#fig_3">4</ref> shows the resulting user interfaces.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Participants and Data</head><p>The 16 participants (the same as in the previous experiment) were subdivided into 4 groups labeled U1, U2, U3, U4.</p><p>To identify the kinds of tasks in a somewhat objective way, we relied on the task categories usually requested in crowdsourcing marketplaces. In more detail, we started from the 11 categories suggested by Amazon Mechanical Turk when creating a new task (see https://requester.mturk.com/create/projects/new): Categorization, Data Collection, Moderation of an Image, Sentiment, Survey, Survey Link, Tagging of an Image, Transcription from A/V, Transcription from an Image, Writing, and Other. To obtain a manageable number of categories in our experiment, we excluded 5 Mechanical Turk categories: Data Collection, Survey and Survey Link (considered somewhat similar to Sentiment), Transcription from A/V (to avoid technical issues on mobile devices), and Other. We therefore selected the 6 task categories shown in Table <ref type="table">2</ref>. Then we created 4 new tasks for each category, for a total of 24 tasks, and grouped them into four task groups (labeled Ta, Tb, Tc, Td), each group containing six tasks, one from each category.</p><p>Using artificial tasks (i.e., tasks created by ourselves) allowed us to remove any platform bias and the issues discussed at the end of Section 3.5 that might have affected the results. Also, their classification was easier (it is sometimes unclear how to classify real tasks). Finally, this allowed us to write the task descriptions in Italian, thus removing any language issue from the experiment (all participants were Italian native speakers). The created tasks are in all respects similar to real tasks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Methods</head><p>We took special care to avoid any order or learning bias. Each participant performed 6 tasks (one for each of the categories in Table <ref type="table">2</ref>) on the desktop platform and 6 other tasks (again, one for each category) on the mobile one. His/her tasks were selected from two task groups, depending on the user group the participant was assigned to. To further avoid bias, participants in each group alternately started from desktop or from mobile. Therefore, each participant performed a total of 12 different tasks, half on desktop and half on mobile. Each task was performed by 8 participants in two user groups, half of whom performed it on mobile and half on desktop.</p><p>Statistics were calculated as follows. First, the average time needed for task completion was calculated for each task, separately for mobile and desktop performance (i.e., averaged over 4 subjects each). Then category averages were calculated from the task averages, again separately for mobile and desktop devices.</p></div>
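The two-level averaging described above can be sketched as follows (our own naming and data layout, not the actual analysis script): completion times are first averaged per task and device over the workers who performed it, then the task means are averaged within each category.

```python
from collections import defaultdict
from statistics import mean

def category_means(records):
    """records: iterable of (category, task_id, device, seconds) tuples.
    Returns {(category, device): mean of per-task mean completion times}."""
    # Level 1: average each task's times on each device over its workers.
    per_task = defaultdict(list)
    for cat, task, device, secs in records:
        per_task[(cat, task, device)].append(secs)
    # Level 2: average the task means within each (category, device) pair.
    per_cat = defaultdict(list)
    for (cat, _task, device), times in per_task.items():
        per_cat[(cat, device)].append(mean(times))
    return {key: mean(means) for key, means in per_cat.items()}
```

Averaging task means rather than raw times keeps a task with unusually many (or few) measurements from dominating its category.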
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4">Results</head><p>Figure <ref type="figure" target="#fig_5">5</ref> shows the average time to complete a task, for each category and on both mobile and desktop devices. Figure <ref type="figure">6</ref> shows the differences in average time to complete. Some tasks are quicker: Cat, Mod, and Sen required less than one minute on average, on both desktop and mobile. ImT and Tra are a bit longer, between one and two minutes on average, and Wri is even longer. As expected, all tasks are faster on desktop, with the only exception of Wri: for it, the participants autonomously decided to use the voice-to-text functionality when on mobile, and this turned out to be quicker than writing with a keyboard (although we did not investigate the quality of the transcription). As highlighted in Figure <ref type="figure">6</ref>, ImT and Tra show a higher mobile-desktop difference, both in absolute time and in percentage, probably because they require typing multiple texts into several fields, a cumbersome activity if carried out on mobile.</p><p>Looking at the percentage differences in Figure <ref type="figure">6</ref>, one can notice that the small absolute difference for Cat is actually quite high in percentage: since Cat tasks are quite short (as can be seen in Figure <ref type="figure" target="#fig_5">5</ref>), even a small absolute difference matters in percentage terms. Conversely, looking at the two rightmost bars, the percentage difference for Wri looks smaller than the absolute time difference; this is again due to the average length of the Wri tasks, which is quite high (see Figure <ref type="figure" target="#fig_5">5</ref>). Still, the improvement on mobile is substantial, at around 20%.</p></div>
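The interplay between absolute and percentage differences discussed above can be illustrated with invented numbers (not the paper's measurements):

```python
def mobile_overhead(desktop_s, mobile_s):
    """Return the mobile-desktop gap in seconds and relative to desktop."""
    diff = mobile_s - desktop_s
    return diff, diff / desktop_s

# Invented times: a short task gains a small absolute but large relative
# gap; a long task gains a larger absolute but smaller relative gap.
short_gap = mobile_overhead(30, 45)    # (15, 0.5): +15 s is +50%
long_gap = mobile_overhead(240, 270)   # (30, 0.125): +30 s is only +12.5%
```

This is why a category of short tasks can look almost unaffected in seconds yet heavily penalized in percentage, and vice versa for long tasks.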
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Task categories</figDesc><table><row><cell>Id</cell><cell>Category</cell><cell>Description</cell></row><row><cell>Cat</cell><cell>Content categorization</cell><cell>Some images are proposed to the worker, who is required to assign each of them to the correct category.</cell></row><row><cell>Mod</cell><cell>Moderation of an image</cell><cell>The worker is required to flag adult content pictures that are inappropriate for children.</cell></row><row><cell>Sen</cell><cell>Sentiment</cell><cell>Some sentences are proposed to the worker, who is required to record his agreement by means of a Likert scale.</cell></row><row><cell>ImT</cell><cell>Image tagging</cell><cell>Some images are proposed to the worker, who is required to tag each of them with keywords.</cell></row><row><cell>Tra</cell><cell>Transcription from an image</cell><cell>The worker is required to extract and write the textual content of a picture.</cell></row><row><cell>Wri</cell><cell>Writing</cell><cell>The worker is required to write a short text about a specific topic.</cell></row></table></figure>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">CONCLUSIONS AND FUTURE WORK</head><p>The work described in this paper is a first exploration of the opportunities and challenges of outsourcing tasks to a mobile crowd. Results provide preliminary evidence of the inadequacy of current crowdsourcing platforms for mobile devices, even if task complexity would be adequate for being carried out in mobile scenarios. In more detail, results are fourfold:</p><p>• Experiment 1 shows that, according to users' perception of difficulty, some crowdsourcing platforms might be slightly more adequate for mobile devices than others.</p><p>• Some inadequacy issues seem rather superficial and can be resolved by better task or interface design.</p><p>• Experiment 2 shows that tasks of different kinds, as defined by mTurk categories, might present different difficulties when carried out on desktop or on mobile devices. This might hint at a first specialization of task assignment, although examining the features of easy and difficult tasks might provide a better ad-hoc specialization, perhaps even independent of the kind of task.</p><p>• Experiment 2 also confirms that mobile devices might offer some unexpected opportunities, like the voice-to-text solution, unexpected by us and autonomously adopted by participants.</p><p>We carried out two separate experiments, although sharing subjects, in order to study two different aspects of mobile crowdsourcing: crowdsourcing platform effects, and task category effects. The experiments are preliminary and their results are not final, but this is consistent with our aim, which was to begin to study the general issue of mobile crowdsourcing. This exploratory attitude is also a motivation for having the two experiments performed with different methodologies (asking participants for an estimate of difficulty, and having participants perform the actual tasks). Of course, these experiments, or similar ones, could have been run by means of some crowdsourcing platform themselves. We preferred a more traditional approach. To further develop this work, other experiments can be imagined. 
For example, the same experiments described here could be repeated in real-world scenarios (on a train, on the road, in classrooms, or in crowded places) to obtain more realistic results. It is also feasible to imagine an extended crowdsourcing platform that, on the basis of a worker's context (time, date, geolocation, habits and preferences, mobile device sensors, etc.), automatically filters and selects the tasks tailored to that specific context.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The interface used in the first experiment (translated into English)</figDesc><graphic coords="3,88.86,87.94,415.34,311.51" type="bitmap" /></figure>
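The context-aware task selection envisioned above could be sketched as follows (an illustrative sketch only: all fields, names, and thresholds are our own assumptions, not part of any existing platform):

```python
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    min_minutes: int = 1       # assumed estimated time to complete
    needs_camera: bool = False # assumed device requirement

@dataclass
class Context:
    free_minutes: int          # how long the worker is likely available
    has_camera: bool = True

def select_tasks(tasks, ctx):
    """Keep only the tasks compatible with the worker's current context:
    short enough for the available time, and not requiring missing sensors."""
    return [t for t in tasks
            if t.min_minutes <= ctx.free_minutes
            and (ctx.has_camera or not t.needs_camera)]
```

Real deployments would add further context dimensions (geolocation, habits, other sensors), but the filtering principle stays the same.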
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Estimated difficulty</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Mobile-desktop difference of estimated difficulty, as an absolute value (bars on the left) and as a fraction (right)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: The interface used in the second experiment: desktop (left) and mobile (right)</figDesc><graphic coords="5,59.19,87.94,366.46,279.09" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Average time to complete for each task category on both mobile and desktop devices</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Platforms used in Experiment 1. Although on mTurk alone one can count hundreds of thousands of tasks available per month <ref type="bibr" target="#b5">[5]</ref>, the samples are not negligible, since they amount to around 1%-5%. For each task we extracted: identifier, title, required proof, remuneration, time needed, requester identifier, and description. The task collection is available upon request. Three examples of tasks in our collection (errors included):</figDesc><table><row><cell>id</cell><cell>Platform name</cell><cell>URL</cell><cell># of tasks</cell></row><row><cell>mTurk</cell><cell>Amazon Mechanical Turk</cell><cell>mturk.com</cell><cell>1154</cell></row><row><cell>micW</cell><cell>Micro Workers</cell><cell>microworkers.com</cell><cell>1302</cell></row><row><cell>minW</cell><cell>Minute Workers</cell><cell>minuteworkers.com</cell><cell>86</cell></row><row><cell>shortT</cell><cell>Short Task</cell><cell>shorttask.com</cell><cell>175</cell></row></table><note>• Task example 1: 1. Go to http://goo.gl/Dlzk 2. Click the link to go to the download 3. Complete a survey/offer on Sharecash and download the file 4. Send proof • Task example 2: 1. Go to http://OneDollarRiches.com/5737 2. Click on Join Now button 3. Invest 1 dollar by logging in into your Alertpay account 4. After that enter you personal details and login.</note></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>


<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Location-based crowdsourcing: extending crowdsourcing to the real world</title>
		<author>
			<persName><forename type="first">F</forename><surname>Alt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Shirazi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Kramer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Nawaz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, NordiCHI &apos;10</title>
				<meeting>the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, NordiCHI &apos;10<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="13" to="22" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">txteagle: Mobile crowdsourcing</title>
		<author>
			<persName><forename type="first">N</forename><surname>Eagle</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 3rd International Conference on Internationalization, Design and Global Development: Held as Part of HCI International 2009, IDGD &apos;09</title>
				<meeting>the 3rd International Conference on Internationalization, Design and Global Development: Held as Part of HCI International 2009, IDGD &apos;09<address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="447" to="456" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">mClerk: enabling mobile crowdsourcing in developing regions</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Thies</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Cutrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Balakrishnan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI &apos;12</title>
				<meeting>the SIGCHI Conference on Human Factors in Computing Systems, CHI &apos;12<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1843" to="1852" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business</title>
		<author>
			<persName><forename type="first">J</forename><surname>Howe</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>Random House Inc</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Analyzing the amazon mechanical turk marketplace</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">G</forename><surname>Ipeirotis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">XRDS</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="16" to="21" />
			<date type="published" when="2010-12">Dec. 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Meeker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wu</surname></persName>
		</author>
		<ptr target="http://www.slideshare.net/kleinerperkins/kpcb-internet-trends-2013" />
		<title level="m">Internet Trends D11 Conference -The annual Internet Trends Report</title>
				<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Labor dynamics in a mobile micro-task market</title>
		<author>
			<persName><forename type="first">M</forename><surname>Musthag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ganesan</surname></persName>
		</author>
		<editor>W. E. Mackay, S. A. Brewster, and S. Bødker</editor>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>ACM</publisher>
			<biblScope unit="page" from="641" to="650" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">MobileWorks: A mobile crowdsourcing platform for workers at the bottom of the pyramid</title>
		<author>
			<persName><forename type="first">P</forename><surname>Narula</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Gutheim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Rolnitzky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kulkarni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hartmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. HCOMP11</title>
				<meeting>HCOMP11</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Crowdsourced news reporting: supporting news content creation with mobile phones</title>
		<author>
			<persName><forename type="first">H</forename><surname>Väätäjä</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Vainio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sirkkunen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Salo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI &apos;11</title>
				<meeting>the 13th International Conference on Human Computer Interaction with Mobile Devices and Services, MobileHCI &apos;11<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="435" to="444" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Crowdsearch: exploiting crowds for accurate real-time image search on mobile phones</title>
		<author>
			<persName><forename type="first">T</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ganesan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">MobiSys &apos;10: Proceedings of the 8th international conference on Mobile systems, applications and services</title>
				<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="77" to="90" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">mCrowd: a platform for mobile crowdsourcing</title>
		<author>
			<persName><forename type="first">T</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Marzilli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Holmes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ganesan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Corner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems, SenSys &apos;09</title>
				<meeting>the 7th ACM Conference on Embedded Networked Sensor Systems, SenSys &apos;09<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="347" to="348" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
