How to design taskification in video games. A framework for purposeful game-based
crowdsourcing


ANNA QUECKE, Department of Design, Politecnico di Milano; SonicJobs
ILARIA MARIANI, Department of Design, Politecnico di Milano

In citizen science, games can play an important role in encouraging the public to engage voluntarily in activities
that contribute to scientific investigation. On the one hand, what appears to be an interesting and challenging
scientific topic may not be captivating or engaging for the general public; on the other, addressing a scientific
challenge often requires a level of knowledge that acts as a barrier to access. Hence the need to develop
projects whose scientific tasks can be accomplished by novices while still retaining the interest of experts.
In this context, game-based crowdsourcing approaches, and taskification in particular, can serve as powerful motivation
systems. By seamlessly integrating a task into an established game experience, it is possible to target players and direct
them to perform the crowdsourcing activity. Situated at the intersection of the three theoretical domains of gamification,
serious games, and crowdsourcing game systems, this paper presents a framework for taskifying games with crowdsourcing
activities. Considering the role played by coherently built story-worlds, narrative, and game mechanics, the framework aims
to provide designers with clear guidance for building purposeful crowdsourcing activities within video games.

CCS CONCEPTS • General and reference~Cross-computing tools and techniques~Design • Human-
centered computing~Collaborative and social computing • Human-centered computing~Human computer
interaction (HCI)~Interactive systems and tools

Additional Keywords and Phrases: Game Design, Taskification, Crowdsourcing, Framework


1 AT THE CROSSROADS OF GAMES AND CROWDSOURCING
Citizen science, or crowd-sourced science, identifies a type of scientific research conducted by the public
(the crowd), who voluntarily engage in activities and by doing so contribute to scientific investigations [22].
Although well established and with a long history, the practice has only recently begun to be supported by socio-
computational systems that accompany and sustain participants’ activities, steadily fostering more
experimentation with public participation in scientific research. Citizen science initiatives stem from the much
larger phenomenon of crowdsourcing, which in turn naturally arose from the peculiar “architecture of
participation” of Web 2.0 [23]. The relevance of crowdsourcing lies in the outcomes of distributed large‐scale
communities of volunteers working together and producing coherent, validated, and reliable data [22].
Crowdsourcing is a model of problem-solving that starts by defining a problem to be solved and a goal to be
achieved and scales the task environment by making it accessible to the public [3]. This creates a fertile
environment where an audience inherently diverse in skills, backgrounds, and expertise is empowered to
participate in and influence the resolution of a problem, offering creative and challenging perspectives. The
practice taps into the notion of collective intelligence as a form of intelligence that is distributed, coordinated in
real-time, constantly improving, and that results in an effective mobilisation of knowledge and expertise [18].
The concept relates to what Surowiecki [37] defines as the “wisdom of the crowd”, a phenomenon whereby,
under specific conditions, groups of people manage to outperform even the best individuals or experts.
Successful citizen-science projects have multiplied across several domains, ranging from activities such as
tagging or classifying images, to reporting sightings and counting (especially of animals), to using
problem-solving and reasoning skills to address matters of societal relevance. Engaged in
combinations of tasks that can only be performed by human beings, participants make a cooperative effort
towards a common, scientific goal. However, a well-known issue widens the gap between theory and practice:
what appears to be an interesting and challenging scientific topic may not be captivating or engaging
for the general public. Additionally, the knowledge required to address a scientific challenge often acts as a
barrier to access. Consequently, there is a clear need to develop projects whose scientific tasks can be
executed and accomplished by novices, while retaining the interest and engagement of experts and
lay experts.
   Given this premise, games have been widely investigated as incentives to involve the public in
crowdsourcing activities, and gamification has become an increasingly popular approach in designing
crowdsourcing systems [10, 31, 34]. Counting on their ability to engage, provide constant motivation [7, 8, 16],
and retain users, while relying on the computational power of contemporary technology,
games and gamified systems have been increasingly incorporated into citizen science projects to crowdsource
data. However, the literature shows that most studies addressing citizen science and public engagement
take a quantitative perspective, focusing on the data obtained or on measuring outputs. Few studies look
into the perspective of the participants and their engagement [12, 24], and fewer still address
how to improve the design of such crowdsourcing systems.
    Situated in this latter domain of investigation, and moving beyond data collection, this research enquires into game-
based crowdsourcing systems. Current debates concern the effectiveness of games such as Foldit [39] and
Eyewire [32] in encouraging the general public to partake in a project [29], questioning whether games can
attract participation from gamers or members of the public not initially interested in science-related
content, while reflecting on the importance of an intrinsic value able to retain players and raising concerns
about the quality of contributions – concerns particularly relevant in citizen science projects where
data contributes to scientific research [27].
    Acknowledging these limits and potentialities, and given that how gamification influences motivation [19, 29]
and its actual impact on volunteers’ engagement and participation are still open questions [12],
this research focuses attention on taskifying a game from a design perspective. The reasoning on how to effectively
design a crowdsourcing system in a gameful context, seamlessly integrating purposeful tasks, pays particular
attention to the role played by the story-world [40], narrative, and game mechanics.

1.1 Taskification: features and interplay with neighboring game-based strategies
This research focuses on taskification as a novel and largely unexplored methodology in game and crowdsourcing
design. Theorized by only a few [26], it is not yet widely endorsed by academics, even though its use is already
recognizable in real case studies. Taskification is the strategy of conceptualizing “the task as just one element or
mechanic to be part of a larger (possibly much larger) game world” [26]. When performing a taskification, the
designer first defines the entertainment experience following commercial game design principles and later
embeds an external purposeful task. This task can take different shapes and be either critical to progress inside
the game or an optional activity, e.g. a subquest or minigame with which the player can engage freely.
Prestopnik and Crowston [26] suggest different ways a taskification could occur in a commercial context:
   (i) taskified games might be developed for profit and devolve revenues, along with crowdsourced results,
        to support scientific research;
   (ii) tasks could be implemented as a means to unlock game items, content, mechanics, or levels, borrowing
         from the micro-payment systems of casual games and replacing the monetary exchange with a user’s
         performance (see the sketch below).
    The major case studies of crowdsourcing game taskification – Project Discovery [5] and Borderlands Science
[9] – follow the latter approach.
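    To make the second pattern concrete, the following is a minimal sketch in Python of a loop in which completing a
crowdsourcing micro-task, rather than a micro-payment, unlocks game content. All names (`MicroTask`, `submit_task`,
the reward item) are hypothetical illustrations introduced here, not drawn from Project Discovery or Borderlands Science.

```python
from dataclasses import dataclass, field

@dataclass
class MicroTask:
    """A single crowdsourced unit of work, e.g. classifying one image."""
    task_id: str
    payload: dict               # data shown to the player (image, sequence, ...)
    answer: str | None = None   # filled in by the player's contribution

@dataclass
class Player:
    name: str
    unlocked_items: list[str] = field(default_factory=list)

def submit_task(player: Player, task: MicroTask, answer: str, reward_item: str) -> None:
    """Record the player's contribution and grant the content that a
    micro-payment would otherwise have bought (the second pattern above)."""
    task.answer = answer                        # contribution forwarded to the crowdsourcer
    player.unlocked_items.append(reward_item)   # game content unlocked in exchange

# Usage: one optional mini-game round unlocks a cosmetic item.
p = Player("p1")
t = MicroTask("t-042", {"image_url": "https://example.org/cell.png"})
submit_task(p, t, answer="mitochondrion", reward_item="golden_skin")
```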
    Taskification can arguably be considered a subset of serious games (SGs, henceforth) development or of
gamification. The difference among these methodologies lies in the relation between the non-game task and
the game itself: gamification covers the whole task with a gameful layer; SGs turn the task into the main mechanic
of the whole game; taskification embeds the task in a small portion of the whole product, usually hiding it
within a larger environment with rich world-building. To sharpen the contrast with gamification, taskification
can be defined as the use of non-entertainment tasks in game contexts. SGs, defined as games
designed for a primary purpose that is not entertainment [1, 38], cannot include taskified games, since the latter
are entertainment games augmented with a purposeful task. Moreover, taskification, which does not involve the
whole gameplay or experience but only a small part of it, can occur even after the game has been designed,
diverging from SG development and mirroring the gamification process, which starts from an existing task and
then gamifies it.
    Taskification is not yet recognized as a process distinct from game design or gamification, and designers and
researchers do not yet share the concept. It is crucial to recognise taskification as a methodology per se rather
than treating it as an ordinary game design process, even though turning entertainment gameplay into purposeful
gameplay is arguably similar in approach to gamification or SG design. Moreover, as with many other innovative
phenomena, the growth of this methodology comes with its own peculiar ethical implications, which require
investigation as soon as possible to avoid dark designs and applications. Game-based crowdsourcing systems
struggle with many ethical issues because they lie between different fields: they face the problems
arising from the controversial use of crowdsourcing, games, gamification, and persuasive technology in general.
Issues thus range from unethical persuasion [2], exploitation [13, 36], manipulation [14, 30, 36], power
imbalances [36], and deception [41] to physical and psychological damage [14, 30]. In particular, taskification
may be ethical as long as it aims at social innovation, for example by supporting research on cancer or other
diseases [4]. However, employing taskification for commercial purposes and profit runs a high risk of producing
exploitative dynamics. All considered, there is an opportunity to unlock a novel methodology that fosters social
innovation, and so far taskification has been applied in this sense. Social innovation occurs when innovative
ideas meet social goals [20] and improve society’s capacity to create new social relationships or collaborations
[21], and taskification case studies trace this pattern while providing access to new knowledge.




1.2 Towards a conscious and conscientious design of tasks
Taskification is a particularly useful methodology for building a crowdsourcing system that requires high numbers
of participants, who can easily be found in mainstream games. By seamlessly integrating the task into the game
experience, it is possible to target players and direct them to perform the crowdsourcing activity – an occurrence
which requires further investigation [27]. Better yet, taskification might become the perfect method to intersect
purposeful game design, the creation of meaningful play experiences, the economics of the game industry, and
the data requirements of scientists [27]. This is a very promising vision of the advantages of augmenting an
entertainment game with purposeful gameplay through taskification. The experimentations in Borderlands [9]
(borderlands.com/en-US/news/2020-04-07-borderlands-science)            and        EVE        Online         [17]
(eveonline.com/discovery) show that such integrations can prove meaningful for players and beneficial for both
the game industry and scientists. Future research should “consider novel trends in games design and
crowdsourcing” [19], and taskification could become one in the near future: so far, only a few gaming companies
(for instance CCP and Gearbox) have implemented scientific tasks in their mainstream games. Moreover, CCP is
iterating on Project Discovery (eveonline.com/discovery), which demonstrates that the company finds it valuable
and wants to keep it part of the game experience. Taskification emerged from the literature review [28] as
far less established than gamification and SG design, but also as highly promising for engaging users in
crowdsourcing activities.
    Hence, this research focuses on taskification as an emergent method in the field of game-based
crowdsourcing and, to encourage its growth, it investigates the process of designing taskified games for
crowdsourcing purposes. Indeed, the literature lacks insights on how to taskify, and even on how practitioners
reached their designs in real-world case studies. Theoretical understanding of the structure and components
of a taskified game is limited, and designing one with so little guidance can be quite hard. These systems are
complex, intersect many different fields of expertise, and need a more conscious and conscientious design
process to deliver desirable results. Thus, a research question arose quite naturally from these
observations: how can a game taskification design process for crowdsourcing be guided?
    Frameworks are an underdeveloped segment of tools in the field of game-based crowdsourcing, where
attention is mainly devoted to gaining practice-based knowledge; practitioners and researchers usually rely on
experimental approaches that are ad hoc to specific studies [4]. By presenting the core elements of design
and their relationships in a clearly structured way, frameworks condense knowledge related to a specific
field or bridge various fields to form interdisciplinary tools that sustain design processes.


2 RESEARCH AND DESIGN METHODOLOGY
In terms of research methodology, the framework relies on wide transdisciplinary desk research (n: 131
resources and n: 30 case studies) situated in the field of game studies, but reaching out to the domains of media
studies and crowdsourcing studies. The review granted an extensive perspective on fundamental theories and
practices, while identifying the various approaches employed to combine games and crowdsourcing. In parallel, the
desk research led to an analysis of the state of the art in terms of serious games and crowdsourcing games and
systems, identifying and enquiring into relevant case studies at the intersection of citizen science projects and games.
Knowledge from different fields was collected, reviewed, and synthesized to build a cross-disciplinary tool: a
framework for designing taskified games. To validate its clarity, robustness, and efficacy, the framework was
tested with groups of game designers through a series of pilots (n: 3) in which participants (n: 9) taskified a
game. The pilots took place over the month of July 2020. Secondary data from the literature review was
triangulated with qualitative primary data obtained from the three pilots as iterations. In each pilot, primary data
was collected by conducting (i) moderate participant observation [35], balancing between “outsider” and “insider”
roles, and (ii) semi-structured focus groups [6, 25], encouraging reflexivity about the experience. The analysis
aimed at understanding the current limits and barriers of the framework and the possibilities for its improvement,
within an iterative design process. The data gathered from each experience informed the framework, leading to
refinements later assessed in the following workshop as a testing ground.


3 RESULTS
This research synthesized all the knowledge acquired into a single tool that could guide taskification design.
The literature on crowdsourcing and game design provides several frameworks that can be translated into the
field of game-based crowdsourcing systems and support their design. Based on the wide literature review
conducted on crowdsourcing and game design [28], a framework to taskify games was designed with the aim
of providing designers with clear guidance for building purposeful crowdsourcing activities within video games.
In light of the reasoning above, it is situated at the intersection of three spheres: (i) Gamification, as the use
of game design elements in a non-game context, exploiting games’ enjoyable features to make non-game
activities more fun [7, 8]; (ii) Serious Games, as games designed for a primary purpose that is not entertainment
[1, 38]; and (iii) Crowdsourcing game systems, which engage players in experiences that produce data to be
used for scientific research [26, 27].
  Building on this, the framework has a twofold scope:
  (i) Providing a better understanding of the theories and practices of taskification as the process of
       integrating purposeful activities into entertainment gaming contexts; in doing so, it categorizes elements
       from game studies, media studies, and crowdsourcing studies, analysing ongoing practices and existing
       frameworks to exploit underlying possibilities;
  (ii) Providing a clear, integrated process for designing game taskification, entailing the different theoretical
       concepts that need to be considered in the process of designing tasks. These underlying theories and their
       interplay are hence combined into a two-step framework aimed at enhancing players’ engagement
       in an entertaining crowdsourcing system.




                             Figure 1: Taskification design framework for crowdsourcing activities.

   The framework builds on two established frameworks, Simperl’s [33] framework for crowdsourcing design
and the MDA [11], a well-known tool for game design. Additionally, the framework relies on the SG-related
concept of diegetic connectivity [15] to balance entertainment and tasks. Finally, a systematic review of
guidelines from the literature on game-based crowdsourcing [28] highlighted other fundamental aspects of these
systems, which were incorporated into the framework design as well. The resulting framework (Figure 1)
consists of two macro-areas, one at the top and the other at the bottom, representing respectively (i) the
crowdsourcing design and (ii) the game taskification design.
   The area of crowdsourcing design contains Simperl’s [33] framework: “What”, “Who”, “How to crowdsource”
and “How to incentivize”. “What” refers to the matter that is crowdsourced, the high-level goal that the system
attempts to address. It defines what contributions to expect and what to present to the crowd – which task and
its design, which levels of expertise and tools. Along with the “What”, the “How to crowdsource” supports the
definition of the task, in particular its granularity, transparency, and validation. Granularity refers to whether a
task is a macrotask (presenting the activity as a whole) or a microtask (dividing the activity into smaller pieces).
Transparency determines whether the task is explicit or implicit, the latter being the case of taskified games,
since participants’ main activity is playing. Finally, validation refers to the system used to assess contributions,
namely manual control or automatic tools, e.g. algorithms. “Who”, endorsed by the “User” element derived from the
guidelines review, aims at identifying the desired target to participate in the crowdsourcing activity. It is
crucial to identify what can affect participation, both positively (motivations) and negatively (barriers), and to use
the correct platforms and communication to reach the desired audience. In the case of taskified games, it would
be proper to design a purposeful activity for a game whose players are the ideal target for the task to be
crowdsourced, and to understand how to make them perceive the value of participating. Finally, “How to
incentivize” leads to reflection on the motivations of participants and how to sustain them. In the case of taskified
games for crowdsourcing purposes, the stimulus to engage with the crowdsourcing activity is the game itself, and
players participate because they consider the tasks intrinsically enjoyable or rewarding in the context of the
game.
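   As an illustration only, these four dimensions and the task attributes discussed above can be captured as a design
checklist in code. The sketch below uses hypothetical field names that are not part of Simperl’s original framework [33],
and includes a redundancy-based majority vote as one possible example of the automatic validation tools mentioned above.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Granularity(Enum):
    MACROTASK = "activity presented as a whole"
    MICROTASK = "activity divided into smaller pieces"

class Transparency(Enum):
    EXPLICIT = "participants knowingly solve the task"
    IMPLICIT = "task hidden in play, as in taskified games"

@dataclass
class CrowdsourcingDesign:
    what: str                  # the high-level goal the system addresses
    who: str                   # target crowd, with motivations and barriers considered
    granularity: Granularity
    transparency: Transparency
    incentive: str             # for taskified games: the game experience itself

def majority_vote(answers: list[str], quorum: int = 3) -> str | None:
    """One possible automatic validation tool: accept a contribution only
    when enough independent players agree on the same answer."""
    if len(answers) < quorum:
        return None
    label, count = Counter(answers).most_common(1)[0]
    return label if count > len(answers) / 2 else None

# Usage: an implicit microtask design whose contributions are cross-validated.
design = CrowdsourcingDesign(
    what="classify protein images",
    who="players of a mainstream sci-fi game",
    granularity=Granularity.MICROTASK,
    transparency=Transparency.IMPLICIT,
    incentive="in-game rewards and narrative framing")
assert majority_vote(["nucleus", "nucleus", "cytoplasm"]) == "nucleus"
```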
   The “Ethics” element, another cluster of the guidelines, stands in the middle of the framework of
crowdsourcing design. Ethical matters are a complex topic in game-based crowdsourcing, and it is key that they
are considered throughout the design process, as both the employment of crowdsourcing and the use of games
to incentivize and persuade can be problematic. According to the investigation of the guidelines, the major
ethical issues concern the need both for a transparent relationship between the player and the crowdsourcer
and for a careful analysis of possible impacts on society and individuals. The following are some examples of
ethical issues to tackle, considering each aspect of Simperl’s framework [33]:

  • What and how to crowdsource should not be unethical, e.g. harmful to someone – for example, training an
    algorithm to steal private data;
  • Participants should not belong to vulnerable categories, such as children, who could hardly understand
    whether they are being manipulated; participants should consent to the activity, knowing the terms of use,
    which should be transparent;
  • The system should not incentivize contributions by manipulating users and tricking them into exploitative
      cycles or using dark game design patterns [41].

   The bottom area of the framework, referring to game taskification design, details the elements of the upper
part and combines the MDA framework [11] with diegetic connectivity [15]. “What” and “How to crowdsource”
define the task (T), while “How to incentivize” focuses on how to engage users through the game, in particular
through its mechanics, dynamics, and aesthetics – the elements of the MDA framework. Games produce fun
experiences that attract people through their aesthetic elements: narrative, challenges, discovery, and so on.
These experiences are supported by the game dynamics, namely its functioning system. This system in turn
works thanks to the rules that compose it, i.e. the mechanics of the game. By combining and tuning these
elements, games can engage players deeply. In parallel, participation can be incentivized by the community
aspects of a game – a topic that emerged from the guidelines review – which do not necessarily have to be
integrated into the game, e.g. wikis or forums. To connect the task with the MDA, the framework relies on
diegetic connectivity, an approach that connects the purposeful activity with the game’s narrative and mechanics.
Diegetic connectivity links the task (T), the story (S), and the mechanics (M). Hence, the task is linked to the
MDA through mechanics and aesthetics, since the latter contain the story, intended as fantasy and narrative
[11]. When designing a taskification, it is crucial to reflect on each of these relations and create new bonds
between the MDA and the task without compromising the balance of mechanics, dynamics, and aesthetics
achieved by the game being taskified.
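   A minimal sketch of how these relations could be made explicit at design time follows. The structure is hypothetical
and intended only to mirror Figure 1: it records each diegetic link between the task (T), the story (S), and a mechanic
(M) so that a team can check that no new bond leaves the MDA triad unbalanced.

```python
from dataclasses import dataclass, field

@dataclass
class DiegeticLink:
    """One designed bond between the purposeful task and the game."""
    task_element: str   # T: the crowdsourcing activity the player performs
    story_hook: str     # S: how the fiction (aesthetics) justifies the task
    mechanic: str       # M: the rule through which the player performs it

@dataclass
class TaskifiedDesign:
    base_game: str
    links: list[DiegeticLink] = field(default_factory=list)

    def unbalanced(self) -> list[DiegeticLink]:
        """Flag links that bind the task to the story but not to a mechanic
        (or vice versa), a simple proxy for the balance concern above."""
        return [l for l in self.links if not (l.story_hook and l.mechanic)]

# Usage (values loosely inspired by Borderlands Science, illustrative only):
design = TaskifiedDesign(base_game="a commercial shooter")
design.links.append(DiegeticLink(
    task_element="align microbial DNA sequences",
    story_hook="an in-world arcade cabinet introduced by an NPC scientist",
    mechanic="tile-matching puzzle that pays out in-game boosters"))
assert design.unbalanced() == []
```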

3.2 Testing the framework in the field
The framework aims at providing knowledge and structure to design taskified games for crowdsourcing
applications, hence its efficacy was tested in a series of three pilots. Three teams of three people each were
asked to develop a concept of a video game taskification with a citizen science project, using the designed
framework as a blueprint for guidance. A call for participants was launched among students and graduates from
Politecnico di Milano and Università Statale di Milano. Nine participants between 25 and 30 y.o. (f: 3; m: 6)
responded to the call and participated in the workshops. Participants were students (n: 2), employees (n: 3) or
fresh graduates (n: 4). They all had previous experience in game design, either by attending game design
classes during their studies (n: 7) or developing an MSc thesis on the topic (n: 2). Their backgrounds were
varied by design, with the intention of reproducing a small-scale simulation of a typical project team
configuration: computer science (n: 2), automation engineering (n: 1), interaction design (n: 3), game design (n:
2), and communication design (n: 1). Three pilots were set over the month of July 2020, running the iterative
process of testing, observing, and implementing three times. Some time was purposefully left between pilots
to leave room for the improvements informed by the data gathered from the previous test. Participants were
arranged into three teams (A, B, C) of three people each, applying the aforementioned distribution of skills,
expertise, and backgrounds so as to obtain balanced teams. The team composition was the following:

  • Team A: computer science, student (M); game design, graduate (F); interaction design, graduate (M);
  • Team B: automation engineering, employee (F); game design, student (M); communication design,
    graduate (F);
  • Team C: computer science, employee (M); interaction design, employee (M); interaction design, graduate
      (M).

   The data collected supported the initial hypothesis that a tool could guide the design of taskification. Six
testers claimed the framework was useful, while two members of Team A and one of Team B expressed
uncertainty. Of the six participants who were positive about the framework, four stressed that its greatest
value was that it allowed them to taskify the game with few resources, namely a short time (eight hours)
and a small team (three members); every team had at least one member who mentioned this
point. Team C even claimed that the result of the taskification process exceeded their expectations and that the
framework supported their creative process, leading them to surprising outcomes. The doubts expressed about
the framework’s efficacy in guiding the design process concerned:
   (i) the minimum game design expertise the framework requires;
   (ii) the experimental setting of the workshop, which cannot recreate the practical and business issues that
         could occur in attempting a taskification of a commercial off-the-shelf game;
   (iii) the risk of overemphasizing the design of aesthetic elements, neglecting mechanics and dynamics.
   Indeed, the first comment shows that a game designer is crucial in a team aiming to taskify a game, and that
the framework is not a tool accessible to everybody. The second unveils an area for further experimentation and
analysis, namely the application of this framework to a real commercial case study. The third might actually
be misleading, as the participant who raised it worked on a particularly narrative game, which may explain
why that team paid so much attention to aesthetics. This circumstance occurred in the first pilot and was not
repeated in the following iterations, suggesting that improving the explanation of the workshop structure and
tools already solved the issue. The data suggest that the framework can effectively provide game designers with
the knowledge to face taskification challenges. In particular, the framework proved effective in boosting the
game design process for taskification and stimulating diverse solutions.




4 CONCLUSIONS
From the analysis of the field of game-based crowdsourcing, a promising and understudied concept
emerged, identified as taskification. This research acknowledges it as a unique technique and distinguishes it
from gamification and SGs by analyzing the relation between the game and the task in all three cases. In
particular, taskification appears to be a less intrusive approach than gamification and SG design because it
operates only on a small portion of an established experience. Hence, the first contribution of this research is
the positioning of taskification and the discussion of why it should be studied separately. Established frameworks
in the fields of game and crowdsourcing design, and a large number of guidelines derived from related fields,
support the tool presented herein, which was developed precisely to guide both the understanding and the
conception of taskified games for crowdsourcing purposes by capturing all the essential factors that constitute
such systems. This theoretical framework summarizes the relevant aspects of both games and crowdsourcing to
sustain the seamless design of crowdsourcing systems added to a game’s structure. The tool relies on
interdisciplinary knowledge: in particular, it combines two frameworks, the MDA [11] and Simperl’s [33]
framework for designing crowdsourcing, and exploits the diegetic connectivity approach [15] to connect them.
The so-formed framework was augmented with the relevant topics derived from the aforementioned guidelines
review.
    Through its testing, the tool demonstrated great potential to reach its aims. Although it can clearly be improved, it
is a first promising step toward the definition of tools and theories to understand, analyze, and shape taskified
games. The study grounding this research shows that there are still few examples of taskified games and
little comprehension of the phenomenon; but as the practice grows, so should its analysis and study, since
seamlessly inserting crowdsourcing systems within video games to direct players’ power is undoubtedly thrilling
for its many possible applications.

REFERENCES
[1]   Abt, C.C. 1987. Serious games. University Press of America.
[2]   Berdichevsky, D. and Neuenschwander, E. 1999. Toward an Ethics of Persuasive Technology. Commun. ACM. 42, 5 (May 1999), 51–
      58. https://doi.org/10.1145/301353.301410.
[3]   Brabham, D.C. 2008. Crowdsourcing as a Model for Problem Solving: An Introduction and Cases. Convergence. 14, 1 (2008), 75–90.
      https://doi.org/10.1177/1354856507084420.
[4]   Chesham, A., Gerber, S.M., Schütz, N., Saner, H., Gutbrod, K., Müri, R.M., Nef, T. and Urwyler, P. 2019. Search and match task:
      development of a Taskified Match-3 puzzle game to assess and practice visual search. JMIR serious games. 7, 2 (2019), e13620.
[5]   CCP 2015. Project Discovery.
[6]   Creswell, J.W. 2008. Research design: Qualitative, quantitative, and mixed methods approaches. Sage.
[7]   Deterding, S., Dixon, D., Khaled, R. and Nacke, L. 2011. From game design elements to gamefulness: defining gamification. (2011), 9–
      15.
[8]   Deterding, S., Sicart, M.A., Nacke, L., O’Hara, K. and Dixon, D. 2011. Gamification. using game-design elements in non-gaming contexts.
      (2011), 2425–2428.
[9]   Gearbox 2020. Borderlands Science.
[10] Hamari, J., Koivisto, J. and Sarsa, H. 2014. Does Gamification Work? -- A Literature Review of Empirical Studies on Gamification. 2014
     47th Hawaii International Conference on System Sciences (Waikoloa, HI, Jan. 2014), 3025–3034.
[11] Hunicke, R., LeBlanc, M. and Zubek, R. 2004. MDA: A formal approach to game design and game research. Proceedings of the AAAI
     Workshop on Challenges in Game AI (2004), 1772.
[12] Iacovides, I., Jennett, C., Cornish-Trestrail, C. and Cox, A.L. 2013. Do Games Attract or Sustain Engagement in Citizen Science? A Study
     of Volunteer Motivations. CHI ’13 Extended Abstracts on Human Factors in Computing Systems (New York, NY, USA, 2013), 1101–1106.
[13] Kaletka, C., Eckhardt, J. and Krüger, D. 2018. Theoretical framework and tools for understanding co-creation in contexts. Technical
     Report #D1.3.
[14] Kim, T.W. and Werbach, K. 2016. More than just a game: ethical issues in gamification. Ethics and Information Technology. 18, 2 (Jun.
     2016), 157–173. https://doi.org/10.1007/s10676-016-9401-5.
[15] Lane, N. and Prestopnik, N.R. 2017. Diegetic Connectivity: Blending Work and Play with Storytelling in Serious Games. Proceedings of
     the Annual Symposium on Computer-Human Interaction in Play (New York, NY, USA, 2017), 229–240.
[16] Law, E. and von Ahn, L. 2009. Input-Agreement: A New Mechanism for Collecting Data Using Human Computation Games. Proceedings
     of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2009), 1197–1206.
[17] Leifsson, H. and Bjarkason, J.Ö. 2015. Project Discovery-Advancing scientific research by implementing citizen science in EVE Online.
[18] Lévy, P. 1997. Collective intelligence. Plenum/Harper Collins.
[19] Morschheuser, B., Hamari, J. and Koivisto, J. 2016. Gamification in Crowdsourcing: A Review. 2016 49th Hawaii International Conference
     on System Sciences (HICSS) (Jan. 2016), 4375–4384.
[20] Mulgan, G., Tucker, S., Ali, R. and Sanders, B. 2007. Social innovation: what it is, why it matters and how it can be accelerated. Skoll
     Centre for Social Entrepreneurship, University of Oxford. Oxford.
[21] Murray, R., Caulier-Grice, J. and Mulgan, G. 2010. The open book of social innovation. National Endowment for Science, Technology
     and the Art.
[22] Nov, O., Arazy, O. and Anderson, D. 2011. Dusting for Science: Motivation and Participation of Digital Citizen Science Volunteers.
     Proceedings of the 2011 IConference (New York, NY, USA, 2011), 68–74.
[23] O’Reilly, T. 2005. Web 2.0: compact definition.
[24] Phillips, T.B., Ballard, H.L., Lewenstein, B.V. and Bonney, R. 2019. Engagement in science through citizen science: Moving beyond data
     collection. Science Education. 103, 3 (May 2019), 665–690. https://doi.org/10.1002/sce.21501.
[25] Plano Clark, V.L. and Creswell, J.W. 2008. The mixed methods reader. Sage.
[26] Prestopnik, N. and Crowston, K. 2012. Purposeful Gaming & Socio-Computational Systems: A Citizen Science Design Case.
     Proceedings of the 17th ACM International Conference on Supporting Group Work (New York, NY, USA, 2012), 75–84.
[27] Prestopnik, N., Crowston, K. and Wang, J. 2017. Gamers, citizen scientists, and data: Exploring participant contributions in two games
     with a purpose. Computers in Human Behavior. 68, (2017), 254–268. https://doi.org/10.1016/j.chb.2016.11.035.
[28] Quecke, A. 2020. Game taskification for crowdsourcing. A design framework to integrate tasks into digital games. Politecnico di Milano –
     School of Design.
[29] Rotman, D., Preece, J., Hammock, J., Procita, K., Hansen, D., Parr, C., Lewis, D. and Jacobs, D. 2012. Dynamic Changes in Motivation
     in Collaborative Citizen-Science Projects. Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (New
     York, NY, USA, 2012), 217–226.
[30] Sandovar, A., Braad, E., Streicher, A. and Söbke, H. 2016. Ethical Stewardship: Designing Serious Games Seriously. Entertainment
     Computing and Serious Games: International GI-Dagstuhl Seminar 15283, Dagstuhl Castle, Germany, July 5-10, 2015, Revised Selected
     Papers. R. Dörner, S. Göbel, M. Kickmeier-Rust, M. Masuch, and K. Zweig, eds. Springer International Publishing. 42–62.
[31] Seaborn, K. and Fels, D.I. 2015. Gamification in theory and action: A survey. International Journal of Human-Computer Studies. 74, (Feb.
     2015), 14–31. https://doi.org/10.1016/j.ijhcs.2014.09.006.
[32] Sebastian Seung of Princeton University (formerly Massachusetts Institute of Technology) 2012. Eyewire.
[33] Simperl, E. 2015. How to Use Crowdsourcing Effectively: Guidelines and Examples. Liber Quarterly. 25, 1 (2015), 18–39.
     https://doi.org/10.18352/lq.9948.
[34] Skarlatidou, A., Hamilton, A., Vitos, M. and Haklay, M. 2019. What do volunteers want from citizen science technologies? A systematic
     literature review and best practice guidelines. Journal of Science Communication. 18, 01 (Jan. 2019).
     https://doi.org/10.22323/2.18010202.
[35] Spradley, J.P. 1980. Doing participant observation. JP Spradley, Participant observation. (1980), 53–84.
[36] Standing, S. and Standing, C. 2018. The ethical use of crowdsourcing. Business Ethics: A European Review. 27, 1 (2018), 72–80.
     https://doi.org/10.1111/beer.12173.
[37] Surowiecki, J. 2005. The wisdom of crowds. Anchor.
[38] Susi, T., Johannesson, M. and Backlund, P. 2007. Serious games: An overview. (2007).
[39] University of Washington, Center for Game Science, Department of Biochemistry 2008. Foldit.
[40] Wolf, M.J.P. 2012. Building Imaginary Worlds: The Theory and History of Subcreation. Routledge.
[41] Zagal, J.P., Björk, S. and Lewis, C. 2013. Dark patterns in the design of games. Foundations of Digital Games 2013 (2013).



