<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Promoting Trustworthy AI in mHealth: a Gamified Approach to Value-Sensitive Design</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Maria</forename><forename type="middle">Inês</forename><surname>Ribeiro</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Eindhoven University of Technology</orgName>
								<address>
									<settlement>Eindhoven</settlement>
									<country key="NL">Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Laura</forename><surname>Genga</surname></persName>
							<email>l.genga@tue.nl</email>
							<affiliation key="aff1">
								<orgName type="institution">Wageningen University &amp; Research</orgName>
								<address>
									<settlement>Wageningen</settlement>
									<country key="NL">Netherlands</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Monique</forename><surname>Simons</surname></persName>
							<email>monique.simons@wur.nl</email>
						</author>
						<author>
							<persName><forename type="first">Pieter</forename><surname>Van Gorp</surname></persName>
						</author>
						<title level="a" type="main">Promoting Trustworthy AI in mHealth: a Gamified Approach to Value-Sensitive Design</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
						<meeting>Third International Conference on Hybrid Human-Artificial Intelligence (HHAI), June 10-14, 2024<address><settlement>Malmö</settlement><country key="SE">Sweden</country></address></meeting>
						<imprint>
							<date type="published" when="2024">2024</date>
						</imprint>
					</monogr>
					<idno type="MD5">2B71183771412E88A56254AECFDAD181</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>mHealth</term>
					<term>Trustworthy AI</term>
					<term>Value-Sensitive Design</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The rise of mobile health (mHealth) apps leveraging AI and wearables to promote healthy lifestyles is accompanied by growing ethical concerns among the public, developers, and policymakers. While AI guidelines exist to mitigate concerns, translating them to practical design requirements remains challenging. This research proposes a gamified approach to help bridge the gap between theory and practice in Value-Sensitive Design (VSD) for AI applications in mHealth. This approach aims to facilitate the development of trustworthy AI by aligning design with stakeholder ethical values. Using the design science methodology, we developed a card game to improve stakeholder participation, foster an understanding of AI in mHealth, and facilitate in-depth ethical discussions. Pilot-testing with 19 peer researchers showed active engagement and motivation of players through self-discovery. The findings highlight the game's potential to elicit ethical discussions and promote an understanding of AI's real-world implications. Future iterations could explore digital, blended, or survey formats to enhance engagement, accessibility, and depth of insights, catering to diverse stakeholder preferences. This gamified approach to VSD holds promise as a tool for supporting the development of trustworthy AI technologies in healthcare, aligned with stakeholder values. Further validation with broader stakeholder groups and a longitudinal impact assessment are needed.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In recent years, mHealth apps that track our sleep patterns, heart rate, and activity levels have become increasingly popular for promoting healthy lifestyle behaviors. While these personalized health tools, powered by AI technology and wearable data, hold immense potential, recent ethical incidents raise concerns and spark alarm among the public, developers, and policymakers <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>.</p><p>Ethical frameworks and regulations are emerging to mitigate these concerns and ensure trustworthy AI development. For instance, the High-Level Expert Group on AI of the European Commission (EC) advocates for a human-centered approach grounded in the ethical principles of respect for human autonomy, prevention of harm, fairness, and explicability <ref type="bibr" target="#b3">[3]</ref>. A self-assessment checklist is available as a tool for AI developers to implement these principles <ref type="bibr" target="#b4">[4]</ref>. Yet, checklist-based approaches lack practical implementation details, leaving developers to navigate complex ethical dilemmas and tensions between diverse stakeholder values <ref type="bibr" target="#b5">[5]</ref>. In mHealth apps, conflicts between data privacy and personalized lifestyle recommendations are particularly evident. For example, an app may collect health and lifestyle data to predict opportune moments for suggesting a walk. Such an app could bring health benefits through increased physical activity, but it also poses privacy risks, as sensitive health data could be exposed. Which is then more valuable: data privacy or health benefits?</p><p>To address these complex trade-offs, several design approaches can inform the integration of stakeholder values in the design process of AI technology. 
While User-Centered Design prioritizes user experience and Privacy by Design focuses on data protection, VSD offers a more comprehensive framework, analyzing AI ethics across individual, group, and societal levels, and aiming at the symbiotic evolution of technology and societal norms <ref type="bibr" target="#b6">[6,</ref><ref type="bibr" target="#b7">7,</ref><ref type="bibr" target="#b8">8]</ref>.</p><p>Multiple methods have been employed to elicit values in VSD, such as the Value Scenario method, which emphasizes technology implications in narrated use cases, or the Value-oriented Mock-up, Prototype, or Field Deployment method, which investigates value implications in real-world contexts <ref type="bibr" target="#b9">[9]</ref>. Despite these efforts, current VSD methods face several limitations, often addressing only one or two of the following key challenges: (1) recruiting and engaging stakeholders in focus groups; (2) providing enough technical and ethical AI understanding to stakeholders; and (3) eliciting ethical discussions that allow for translating abstract findings into actionable requirements for AI developers <ref type="bibr" target="#b6">[6]</ref>.</p><p>A gamified tool seems intuitively capable of addressing these challenges simultaneously. First, games are inherently engaging, attracting and retaining stakeholder participation better than traditional methods. Second, they may simulate complex scenarios and provide immediate feedback, helping stakeholders grasp AI's technical and ethical dimensions without prior expertise. Third, the structured yet flexible nature of games allows for quantitative tracking of decisions and actions, providing concrete data for actionable design requirements.</p><p>This research proposes to support VSD with a gamified approach. We developed and pilot-tested a card game to elicit and explore stakeholder values regarding the use of AI in mHealth apps. 
This approach seeks to provide practical insights that can enhance the effectiveness of VSD in guiding the development of trustworthy AI technology.</p><p>The structure of the remainder of this paper is as follows: Section 2 outlines the methods used to develop and pilot-test the gamified approach; Section 3 presents the key findings from the pilot tests; Section 4 discusses the adherence of the game to its objectives and potential future directions for refining the game; and Section 5 concludes with a summary of key points and suggestions for further research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methods</head><p>In this study, we employed the design science methodology framework to develop and refine a game exploring stakeholder values and ethical considerations in using AI for mHealth apps <ref type="bibr" target="#b10">[10]</ref>. The overall goal was to provide a practical tool to support VSD. This preliminary version of the game was designed for a general population, assessing their acceptance of using private data to generate personalized lifestyle recommendations. This section outlines the game objectives, design, and pilot testing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Game Objectives</head><p>The game aimed to achieve the following objectives:</p><p>Objective 1: Enhance Recruitment and Engagement. Leverage gamification to create an interactive and captivating experience for stakeholders during focus groups.</p><p>Objective 2: Provide Understanding of AI. Present concrete examples of AI applications and implications to guide the definition of AI design requirements by assessing ethical concerns about specific uses of AI.</p><p>Objective 3: Elicit In-Depth Ethical Discussions. Engage stakeholders in structured discussions on AI in mHealth to gain insights on specific ethical considerations relevant to AI design and development.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Game Design</head><p>The game design adheres to the Mechanics-Dynamics-Aesthetics framework to create an engaging exploration of ethical considerations for AI in mHealth apps, centered on the trade-off between data privacy and personalized lifestyle recommendations <ref type="bibr" target="#b11">[11]</ref>.</p><p>To enhance recruitment and engagement (Objective 1), the game offers intrinsic and extrinsic rewards. At the beginning of a game session, players were motivated to embark on a self-discovery journey fostering reflection on personal ethical values (intrinsic reward) while earning AI user-type badges (extrinsic reward). This AI user type was defined based on the prevalence of each participant's ethical concerns, categorized according to the four ethical principles of trustworthy AI defined by the EC <ref type="bibr" target="#b3">[3]</ref>. During the game, participants encountered multiple ethical dilemmas that required them to weigh competing values and priorities when interacting with mHealth apps. The game provided a safe and comfortable social environment for open and honest discussions about ethical concerns, promoting community and empathy among players.</p><p>The core game mechanics revolved around Black Cards presenting AI-generated lifestyle recommendations with five possible human reactions (Figure <ref type="figure" target="#fig_0">1</ref>), aiming to promote understanding of AI's real-world implications (Objective 2). Each prompt was linked to at least one AI development decision, e.g. 'Is it acceptable to use GPS location to recommend convenient and nearby walking routes?'. Players individually chose a White Card, labeled from A to E, reflecting their preferred reaction to the AI prompt, and placed it face down on the table. A moderator facilitated discussion in each round as players shared and debated their choices (Objective 3). 
Color coding concealed on Score-Keeping Cards tracked player decisions. Upon game over, players earned AI User-Type Badges reflecting their ethical priorities based on gameplay, revealed by the Game Over Card.</p></div>
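The badge mechanic described above, tallying which of the four EC trustworthy-AI principles a player's card choices most often reflect, can be sketched as follows. The card-to-principle coding and the example rounds are hypothetical, since the paper does not publish the actual coding on the Score-Keeping Cards.

```python
from collections import Counter

# The four ethical principles of trustworthy AI from the EC guidelines [3].
PRINCIPLES = ("human autonomy", "prevention of harm", "fairness", "explicability")

# Hypothetical coding: for each Black Card round, which principle each
# White Card reaction (A-E) signals. Illustrative only.
ROUND_CODING = [
    {"A": "prevention of harm", "B": "human autonomy", "C": "fairness",
     "D": "explicability", "E": "human autonomy"},
    {"A": "fairness", "B": "prevention of harm", "C": "human autonomy",
     "D": "explicability", "E": "human autonomy"},
    {"A": "prevention of harm", "B": "explicability", "C": "fairness",
     "D": "human autonomy", "E": "human autonomy"},
]

def ai_user_type(choices):
    """Map a player's White Card choices (one letter per round) to the
    most prevalent principle, i.e. their AI user-type badge."""
    tally = Counter(ROUND_CODING[i][c] for i, c in enumerate(choices))
    badge, _ = tally.most_common(1)[0]
    return badge

print(ai_user_type(["B", "E", "D"]))  # all three choices map to "human autonomy"
```

Counting the most frequent principle directly mirrors the paper's "prevalence of each participant's ethical concerns"; a ranking-based variant (Finding 3) would replace the single letter per round with a weighted tally.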
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Pilot Testing</head><p>We conducted two 90-minute focus groups to pilot-test the game. A total of 19 researchers were recruited through convenience sampling at our affiliated universities (Eindhoven University of Technology and Wageningen University &amp; Research). Both sessions followed a similar agenda: an introduction explaining the game and its objectives (10 minutes), gameplay (six rounds or 45 minutes), and feedback (35 minutes). Observations during gameplay from both focus groups were used to evaluate the game's adherence to its key objectives. The feedback sessions served to ideate new game mechanics, dynamics, and aesthetics and to refine the game design for future iterations. Focus Group 1 first engaged in spontaneous feedback and then collaborated in a co-creation task using the gamification model canvas to refine the game design <ref type="bibr" target="#b12">[12]</ref>. Focus Group 2 participated in a semi-structured discussion with predefined questions to guide the co-creation process.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results</head><p>We briefly report the most significant findings and the related suggestions for game refinement that participants offered in the focus groups.</p><p>Finding 1: Motivation through Self-Discovery. Participants found that uncovering their AI user type was a strong motivator for participation. While some players found that the assigned type aligned with their values, others desired more rounds for a clearer picture. One participant recommended using a subset of AI scenarios in digital format as a teaser to recruit players.</p><p>Finding 2: Player Engagement and Relatedness. Participants expressed joy in gameplay, reporting higher engagement when scenarios resonated with personal experiences. The alignment of AI prompts with personal interests significantly influenced their reactions and investment in the gameplay. Participants suggested avoiding overly specific scenarios (e.g., detailed activity timing) and incorporating open-ended response options to encourage imagination and enhance connection to the scenarios.</p><p>Finding 3: Discussion. Participants engaged in discussions prompted by the game's ethical dilemmas, but the challenge of selecting a single answer limited their choices. To address this, they proposed implementing a ranking score system to allow more nuanced responses. Some participants were unsure about the benefits of discussions for uncovering their AI user type. They suggested clarifying discussion goals and offering incentives for active participation. Additionally, participants recommended using a centralized moderator and an AI voice for reading prompts to streamline gameplay and enhance immersion. Finally, participants emphasized the need for a safe environment among players; they were worried that introverted players would not give their input. It was suggested to cluster stakeholders in dedicated sessions.</p><p>Finding 4: Contextual Clarity. Participants highlighted the need for additional context surrounding AI prompts to facilitate informed decision-making. 
They proposed introducing a game board element displaying complementary information.</p><p>Finding 5: Understanding of AI in mHealth. The game facilitated an understanding of AI's real-world implications as it revealed participants' varied comfort levels with sharing different data (e.g., social media vs. physiological) for an AI-driven mHealth tool.</p><p>Finding 6: Influence of Phrasing. Participants identified potential bias from prompt wording and tone. They recommended neutral language while acknowledging humor's role in fostering curiosity and engagement.</p><p>Finding 7: Digital Format. While some participants appreciated the physical format, transitioning to an online platform was viewed favorably. This could enable new mechanics like nuanced scoring and virtual moderation. Participants believed that an online version would be more accessible and inclusive, potentially reaching a wider audience beyond physical group settings.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Discussion</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Game Adherence to Objectives</head><p>Pilot testing demonstrated the serious game's potential to achieve its key objectives. First, the game promises to attract participants (Objective 1) through gamification elements like badges and personalized user types, offering an enjoyable and stimulating self-discovery experience. Moving forward, clarifying discussion goals and rewarding participation could further enhance engagement.</p><p>Second, the data privacy concerns raised during gameplay highlight the game's ability to guide players in learning the implications of AI in mHealth (Objective 2). A storytelling dynamic holds the potential to further contextualize different uses of AI in mHealth. Hence, this gamified approach seems suitable for including stakeholders with low AI literacy in the design process. In addition to assessing ethical trade-offs between data privacy and potential health outcomes, this gamified tool could be leveraged at other stages of the design process, e.g. to assess the ease of use of prototypes or to evaluate stakeholders' feeling of empowerment in the co-creation of new technology. Despite such opportunities, there is a need to explore how such a gamified approach could be scaled without cumbersome effort in adapting the game to new use cases.</p><p>Third, the structured format encourages stakeholders to debate the ethical implications of AI technologies (Objective 3). When players shared their reasoning for choosing between different ethical human reactions, they provided nuanced insights that may inform AI developers in making design decisions. In future work, each game card or AI prompt could be linked to an AI development decision, where quantitative analysis of the players' choices may translate stakeholders' values into actionable insights aligned with trustworthy AI principles.</p></div>
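The quantitative analysis envisioned for future work, logging each White Card choice against the AI development decision its Black Card probes and aggregating across players, could look like the following sketch. The decision names and the mapping of reactions A-B to acceptance and D-E to rejection (with C neutral) are illustrative assumptions, not the paper's coding.

```python
# Illustrative assumption: reactions A-B signal acceptance of the probed
# development decision, D-E signal rejection, and C stays neutral.
ACCEPT, REJECT = {"A", "B"}, {"D", "E"}

def decision_support(log):
    """log: (development_decision, choice) tuples pooled over all players.
    Returns, per decision, the acceptance ratio among non-neutral choices."""
    stats = {}  # decision -> (accepted, total non-neutral)
    for decision, choice in log:
        acc, tot = stats.get(decision, (0, 0))
        if choice in ACCEPT:
            stats[decision] = (acc + 1, tot + 1)
        elif choice in REJECT:
            stats[decision] = (acc, tot + 1)
    return {d: acc / tot for d, (acc, tot) in stats.items()}

# Hypothetical gameplay log for two development decisions.
log = [("use GPS location for route suggestions", "A"),
       ("use GPS location for route suggestions", "E"),
       ("use GPS location for route suggestions", "B"),
       ("share data with a human coach", "D")]
print(decision_support(log))  # acceptance: 2/3 for GPS use, 0/1 for coach sharing
```

Such per-decision acceptance ratios are one way to turn gameplay into actionable requirements; the ranking system proposed in Finding 3 would supply graded rather than binary signals.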
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Future directions</head><p>Three possible future directions emerge to refine the game in subsequent design iterations:</p><p>1. In-person Digital Approach: Moving the game to a digital platform while preserving its engaging elements could enhance accessibility and scalability. A digital version could introduce nuanced scoring mechanisms and virtual moderation, and incorporate additional contextual information to improve the game's effectiveness and reduce response bias. 2. Blended Approach: Combining elements of the paper-based game with digital components offers the advantages of both formats. This approach could maintain tangible interaction with physical cards while integrating online features for enhanced scoring, moderation, and broader engagement across different settings. It would cater to diverse preferences and maximize the game's impact. 3. Digital Survey Approach: A digital survey format could target stakeholders who may be unwilling to dedicate time to gameplay but whose input remains valuable for AI system design. While this approach could scale distribution and provide more representative data, it offers fewer gamification opportunities for engagement (Objective 1) and may sacrifice the nuanced personal values that emerge from meaningful discussions during gameplay (Objective 3), which are not trivially realized in online settings.</p><p>In future research, the choice of the most suitable approach depends on the desired engagement, accessibility, and depth of insights needed for ethical AI design and development; A/B testing could provide further insight. 
Further exploration and refinement are crucial for maximizing the game's potential.</p><p>Future validation efforts should involve broader testing with diverse stakeholder groups beyond academic researchers and longitudinal studies to assess the game's impact on stakeholders' attitudes and decision-making processes over time, establishing it as a reliable tool for promoting responsible and trustworthy AI development.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>By combining gamified engagement, a deeper understanding of AI applications, and in-depth ethical discussions, this gamified approach shows promise as a tool to support the development of trustworthy AI in mHealth aligned with stakeholder values. Further refinement efforts could explore a fully digital format prioritizing accessibility and nuanced scoring, a blended physical-digital approach, or even a streamlined online survey, depending on the desired balance between engagement, accessibility, and depth of stakeholder insights.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Black Card sample presenting an AI-generated health recommendation and human reactions.</figDesc></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The authors gratefully acknowledge the contributions of the researchers who participated in the focus groups for their valuable feedback, which shaped the next game design iteration.</p></div>
			</div>


			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>https://orcid.org/0009-0001-7746-4685 (M. I. Ribeiro); https://orcid.org/0000-0001-8746-8826 (L. Genga); https://orcid.org/0000-0003-4693-9980 (M. Simons); https://orcid.org/0000-0001-5197-3986 (P. Van Gorp)</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Understanding personalization for health behavior change applications: A review and future directions</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kankanhalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhao</surname></persName>
		</author>
		<idno type="DOI">10.17705/1thci.00152</idno>
	</analytic>
	<monogr>
		<title level="j">AIS Transactions on Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="316" to="349" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Preventing repeated real world AI failures by cataloging incidents: The AI incident database</title>
		<author>
			<persName><forename type="first">S</forename><surname>McGregor</surname></persName>
		</author>
		<idno type="DOI">10.1609/AAAI.V35I17.17817</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence</title>
				<meeting>the Thirty-Fifth AAAI Conference on Artificial Intelligence<address><addrLine>Virtual Event</addrLine></address></meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2021">2021 Feb 2-9</date>
			<biblScope unit="page" from="15458" to="15463" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m">Ethics guidelines for trustworthy AI</title>
		<author>
			<orgName>European Commission, Directorate-General for Communications Networks, Content and Technology</orgName>
		</author>
		<idno type="DOI">10.2759/346720</idno>
		<imprint>
			<publisher>Publications Office</publisher>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m">The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self assessment</title>
		<author>
			<orgName>European Commission, Directorate-General for Communications Networks, Content and Technology</orgName>
		</author>
		<idno type="DOI">10.2759/002360</idno>
		<imprint>
			<publisher>Publications Office</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">An overview of artificial intelligence ethics</title>
		<author>
			<persName><forename type="first">C</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yao</surname></persName>
		</author>
		<idno type="DOI">10.1109/TAI.2022.3194503</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="799" to="819" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The limitations of user-and human-centered design in an ehealth context and how to move beyond them</title>
		<author>
			<persName><forename type="first">L</forename><surname>Van Velsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Ludden</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Grünloh</surname></persName>
		</author>
		<idno type="DOI">10.2196/37341</idno>
	</analytic>
	<monogr>
		<title level="j">J Med Internet Res</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page">e37341</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Privacy by design</title>
		<author>
			<persName><forename type="first">P</forename><surname>Schaar</surname></persName>
		</author>
		<idno type="DOI">10.1007/s12394-010-0055-x</idno>
	</analytic>
	<monogr>
		<title level="j">Identity in the Information Society</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="267" to="274" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Value sensitive design and information systems</title>
		<author>
			<persName><forename type="first">B</forename><surname>Friedman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">H</forename><surname>Kahn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Borning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Huldtgren</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-94-007-7844-3_4</idno>
	</analytic>
	<monogr>
		<title level="m">Early engagement and new technologies: Opening up the laboratory</title>
				<editor>
			<persName><forename type="first">N</forename><surname>Doorn</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Schuurbiers</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Van De Poel</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Gorman</surname></persName>
		</editor>
		<meeting><address><addrLine>Netherlands</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="55" to="95" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A survey of value sensitive design methods</title>
		<author>
			<persName><forename type="first">B</forename><surname>Friedman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Hendry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Borning</surname></persName>
		</author>
		<idno type="DOI">10.1561/1100000015</idno>
		<ptr target="https://www.nowpublishers.com/article/Details/HCI-015" />
	</analytic>
	<monogr>
		<title level="j">Foundations and Trends® in Human-Computer Interaction</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="63" to="125" />
			<date type="published" when="2017">2017</date>
			<publisher>Now Publishers, Inc</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">The design cycle</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Wieringa</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-662-43839-8_3</idno>
	</analytic>
	<monogr>
		<title level="m">Design Science Methodology for Information Systems and Software Engineering</title>
				<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="27" to="34" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">MDA: A formal approach to game design and game research</title>
		<author>
			<persName><forename type="first">R</forename><surname>Hunicke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Leblanc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Zubek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Nineteenth AAAI Conference on Artificial Intelligence</title>
				<meeting>the Nineteenth AAAI Conference on Artificial Intelligence<address><addrLine>San Jose, CA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2004-07-25">2004 Jul 25-29</date>
			<biblScope unit="volume">4</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Gamification as a strategy of internal marketing</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L R</forename><surname>Robledo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">N</forename><surname>Lucena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Arenas</surname></persName>
		</author>
		<idno type="DOI">10.3926/ic.455</idno>
	</analytic>
	<monogr>
		<title level="j">Intangible Capital</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="1113" to="1144" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
