=Paper=
{{Paper
|id=Vol-1345/keynote1
|storemode=property
|title=Human Intelligence in Search and Retrieval
|pdfUrl=https://ceur-ws.org/Vol-1345/keynote1.pdf
|volume=Vol-1345
}}
==Human Intelligence in Search and Retrieval==
Carsten Eickhoff

Department of Computer Science, ETH Zurich, Switzerland

ecarsten@inf.ethz.ch

Crowdsourcing has developed into a magic bullet for the data and annotation needs of modern-day IR researchers. The number of academic studies as well as industrial applications that employ the crowd for creating, curating, annotating, or aggregating documents is growing steadily. Aside from the multitude of scientific papers relying on crowd labour for system evaluation, there has been a strong interdisciplinary line of work dedicated to finding effective and efficient forms of using this emerging labour market. Central research questions include (1) estimating and optimizing the reliability and accuracy of often untrained workers in comparison with highly trained professionals [1]; (2) identifying or preventing noise and spam in the submissions [4]; and (3) distributing tasks and remunerations across workers as cost-efficiently as possible [2].

The vast majority of studies understand crowdsourcing as the act of making micro payments to individuals in return for compartmentalized units of creative or intelligent labour. Gamification proposes an alternative incentive model in which entertainment replaces money as the motivating force drawing the workers [3]. Under this alternative paradigm, tasks are embedded in game environments in order to increase the attractiveness and immersion of the work interface. While gamification rightfully points out that paid crowdsourcing is not the only viable option for harnessing crowd labour, it is still merely another concrete instantiation of the community's actual need: a formal worker incentive model for crowdsourcing. Only by understanding individual motivations can we deliver truly adequate reward schemes that ensure faithful contributions and long-term worker engagement. It is unreasonable to assume that the binary money-versus-entertainment decision reflects the full complexity of the worker motivation spectrum. What about education, socializing, vanity, or charity? All of these are valid examples of factors that compel people to lend us their work force. This is not to say that we necessarily have to promote edufication and all its possible siblings as new paradigms; they should merely start to take their well-deserved place on our mental map of crowdsourcing incentives.

In this talk, we will cover a range of interesting scenarios in which different incentive models may fundamentally change the way in which we can tap the considerable potential of crowd labour. We will discuss cases in which standard crowdsourcing and gamification schemes reach the limits of their capabilities, forcing us to rely on alternative strategies. Finally, we will investigate whether crowdsourcing even has to be an active occupation, or whether it can happen as a by-product of more organic human behaviour.

==References==

[1] Omar Alonso and Stefano Mizzaro. Can we get rid of TREC assessors? Using Mechanical Turk for relevance assessment. In Proceedings of the SIGIR 2009 Workshop on the Future of IR Evaluation.

[2] Ruggiero Cavallo and Shaili Jain. Efficient crowdsourcing contests. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2012.

[3] Sebastian Deterding, Dan Dixon, Rilla Khaled, and Lennart Nacke. From game design elements to gamefulness: defining gamification. In Proceedings of the International Academic MindTrek Conference: Envisioning Future Media Environments. ACM, 2011.

[4] Pei-Yun Hsueh, Prem Melville, and Vikas Sindhwani. Data quality from crowdsourcing: a study of annotation selection criteria. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing. Association for Computational Linguistics, 2009.

Copyright © 2015 for the individual papers by the papers' authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors. In: F. Hopfgartner, G. Kazai, U. Kruschwitz, and M. Meder (eds.): Proceedings of the GamifIR'15 Workshop, Vienna, Austria, 29-March-2015, published at http://ceur-ws.org