<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.1007/s12394-010-0055-x</article-id>
      <title-group>
        <article-title>Promoting Trustworthy AI in mHealth: a Gamified Approach to Value-Sensitive Design</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maria Inês Ribeiro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Laura Genga</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Monique Simons</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pieter Van Gorp</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Technical University of Eindhoven, Eindhoven</institution>
          ,
          <country country="NL">Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
<institution>Wageningen University &amp; Research, Wageningen</institution>
          ,
          <country country="NL">Netherlands</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>4</volume>
      <fpage>10</fpage>
      <lpage>14</lpage>
      <abstract>
        <p>The rise of mobile health (mHealth) apps leveraging AI and wearables to promote healthy lifestyles is accompanied by growing ethical concerns among the public, developers, and policymakers. While AI guidelines exist to mitigate concerns, translating them to practical design requirements remains challenging. This research proposes a gamified approach to help bridge the gap between theory and practice in Value-Sensitive Design (VSD) for AI applications in mHealth. This approach aims to facilitate the development of trustworthy AI by aligning design with stakeholder ethical values. Using the design science methodology, we developed a card game to improve stakeholder participation, foster an understanding of AI in mHealth, and facilitate in-depth ethical discussions. Pilot-testing with 19 peer researchers showed active engagement and motivation of players through self-discovery. The findings highlight the game's potential to elicit ethical discussions and promote an understanding of AI's real-world implications. Future iterations could explore digital, blended, or survey formats to enhance engagement, accessibility, and depth of insights, catering to diverse stakeholder preferences. This gamified approach to VSD holds promise as a tool for supporting the development of trustworthy AI technologies in healthcare, aligned with stakeholder values. Further validation with broader stakeholder groups and a longitudinal impact assessment are needed.</p>
      </abstract>
      <kwd-group>
        <kwd>mHealth</kwd>
        <kwd>Trustworthy AI</kwd>
        <kwd>Value-Sensitive Design</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The European Commission (EC) advocates for a human-centered approach grounded in the ethical principles
of respect for human autonomy, prevention of harm, fairness, and explicability [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. A
self-assessment checklist is available as a tool for AI developers to implement these principles
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Yet, checklist-based approaches lack practical implementation details, leaving developers to
navigate complex ethical dilemmas and tensions between diverse stakeholder ethical values [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
In mHealth apps, conflicts between data privacy and personalized lifestyle recommendations
are particularly evident. For example, an app may collect health and lifestyle data to predict
opportunistic moments for suggesting a walk. This app could bring health benefits with
increased physical activity but also poses privacy risks as sensitive health data could be exposed.
Which, then, is more valuable: data privacy or health benefits?
      </p>
      <p>
        To address these complex trade-offs, several design approaches can inform the integration
of stakeholder values in the design process of AI technology. While User-Centered Design
prioritizes user experience and Privacy by Design focuses on data protection, VSD offers a more
comprehensive framework, analyzing AI ethics across individual, group, and societal levels, and
aiming at the symbiotic evolution of technology and societal norms [
        <xref ref-type="bibr" rid="ref6">6, 7, 8</xref>
        ].
      </p>
      <p>
        Multiple methods have been employed to elicit values in VSD, such as the Value Scenario
method, which emphasizes technology implications in narrated use cases, or the Value-oriented
Mock-up, Prototype, or Field Deployment method, which investigates value implications in
real-world contexts [9]. Despite these efforts, current VSD methods face several limitations, often
addressing only one or two of these key challenges: (1) recruiting and engaging stakeholders
in focus groups; (2) providing enough technical and ethical AI understanding to stakeholders;
and (3) eliciting ethical discussions that allow for translating abstract findings into actionable
requirements for AI developers [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>A gamified tool seems intuitively capable of addressing these challenges simultaneously.
First, games are inherently engaging, attracting and retaining stakeholder participation better
than traditional methods. Second, they may simulate complex scenarios and provide immediate
feedback, helping stakeholders grasp AI’s technical and ethical dimensions without prior
expertise. Third, the structured yet flexible nature of games allows for quantitative tracking of
decisions and actions, providing concrete data for actionable design requirements.</p>
      <p>This research proposes to support VSD with a gamified approach. We developed and
pilot-tested a card game to elicit and explore stakeholder values regarding the use of AI in mHealth
apps. This approach seeks to provide practical insights that can enhance the effectiveness of
VSD in guiding the development of trustworthy AI technology.</p>
      <p>The structure of the remainder of this paper is as follows: Section 2 outlines the methods
used to develop and pilot-test the gamified approach; Section 3 presents the key findings from
the pilot tests; Section 4 discusses the adherence of the game to its objectives and potential
future directions for refining the game; and Section 5 concludes with a summary of key points
and suggestions for further research.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methods</title>
      <p>In this study, we employed the design science methodology framework to develop and refine
a game exploring stakeholder values and ethical considerations in using AI for mHealth apps
[10]. The overall goal was to provide a practical tool to support VSD. This preliminary version
of the game was designed for the general population, to assess their acceptance of using private
data to generate personalized lifestyle recommendations. This section outlines the game
objectives, design, and pilot testing.</p>
      <sec id="sec-2-1">
        <title>2.1. Game Objectives</title>
        <p>The game aimed to achieve the following objectives:</p>
        <p>Objective 1: Enhance Recruitment and Engagement. Leverage gamification to create an
interactive and captivating experience for stakeholders during focus groups.</p>
        <p>Objective 2: Provide Understanding of AI. Present concrete examples of AI applications
and implications to guide the definition of AI design requirements by assessing ethical concerns
on AI-specific uses.</p>
        <p>Objective 3: Elicit In-Depth Ethical Discussions. Engage stakeholders in structured
discussions on AI in mHealth to gain insights on specific ethical considerations relevant to AI
design and development.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Game Design</title>
        <p>The game design adheres to the Mechanics-Dynamics-Aesthetics framework to create an
engaging exploration of AI ethical considerations in mHealth apps with a trade-off between data
privacy and personalized lifestyle recommendations [11].</p>
        <p>
          To enhance recruitment and engagement (Objective 1), the game offers intrinsic and extrinsic
rewards. At the beginning of a game session, players were motivated to embark on a
self-discovery journey fostering reflection on personal ethical values (intrinsic reward) while earning
AI user-type badges (extrinsic reward). This AI user type was defined based on the prevalence
of each participant’s ethical concerns categorized according to the four ethical principles of
trustworthy AI defined by the EC [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. During the game, participants encountered multiple
ethical dilemmas that required them to weigh competing values and priorities when interacting
with mHealth apps. The game provided a safe and comfortable social environment for open and
honest discussions about ethical concerns, promoting community and empathy among players.
        </p>
        <p>The core game mechanics revolve around Black Cards presenting
AI-generated lifestyle recommendations with five possible human reactions (Figure 1), aiming to
promote understanding of AI’s real-world implications (Objective 2). Each prompt is linked
to at least one AI development decision, e.g. ’Is it acceptable to use GPS location to recommend
convenient and nearby walking routes?’. Players individually choose a White Card, labeled from
A to E, reflecting their preferred reaction to the AI prompt, and place it face down on the
table. A moderator facilitates discussion in each round as players share and debate their
choices (Objective 3). Encrypted color coding on Score Keeping Cards tracks player decisions.
At game over, players earn AI User-Type Badges reflecting their ethical priorities based
on gameplay, revealed by the Game Over Card.</p>
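      <p>The badge-assignment step described above can be sketched in code. The card labels, the four EC principles, and the card-to-principle mapping used here are illustrative assumptions for a minimal sketch, not the authors' actual game materials:</p>
      <preformat>
```python
from collections import Counter

# The four ethical principles of trustworthy AI from the EC guidelines [3].
PRINCIPLES = (
    "human autonomy",
    "prevention of harm",
    "fairness",
    "explicability",
)

# Assumed mapping from a White Card label to the principle it prioritizes;
# in the real game this mapping may differ per Black Card.
CARD_TO_PRINCIPLE = {
    "A": "human autonomy",
    "B": "prevention of harm",
    "C": "fairness",
    "D": "explicability",
    "E": "human autonomy",  # e.g. an opt-out style reaction (assumption)
}

def assign_badge(choices):
    """Return the most prevalent principle across a player's card choices."""
    tally = Counter(CARD_TO_PRINCIPLE[c] for c in choices)
    # most_common(1) yields the dominant principle and its count
    principle, _count = tally.most_common(1)[0]
    return principle

print(assign_badge(["A", "C", "A", "E", "B"]))  # prints "human autonomy"
```
      </preformat>
      <p>A player's AI user-type badge would then simply name the principle that dominated their gameplay choices.</p>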
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>We briefly report the most significant findings and related participants’ suggestions for game
refinement offered in focus groups.
      <p>Finding 1: Motivation through Self-Discovery. Participants found that uncovering their
AI user type was a strong motivator for participation. While some players found the assigned
type aligned with their values, others desired more rounds for a clearer picture. One participant
recommended using a subset of AI scenarios in digital format as a teaser to recruit players.</p>
      <p>Finding 2: Player Engagement and Relatedness. Participants expressed joy in
gameplay, reporting higher engagement when scenarios resonated with personal experiences. The
alignment of AI prompts with personal interests significantly influenced their reactions and
investment in the gameplay. Participants suggested avoiding overly specific prompts (e.g.,
detailed activities or timing) and incorporating open-ended response options to encourage
imagination and enhance connection to the scenarios.</p>
      <p>Finding 3: Ethical Discussion. Participants engaged in discussions prompted by the game’s
ethical dilemmas, expressing the challenge of selecting a single answer from limited choices. To
address this, they proposed implementing a ranking score system to allow for more nuanced
responses. Some participants were unsure about the benefits of discussions for uncovering
their AI user type. They suggested clarifying discussion goals and offering incentives for active
participation. Additionally, participants recommended using a centralized moderator and AI
voice for reading prompts to streamline gameplay and enhance immersion. Finally, participants
emphasized the need for a safe environment among players; they were worried that
introverted players would not give their input. It was suggested to cluster stakeholders in
dedicated sessions.</p>
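      <p>The ranking score system proposed by participants could, for instance, take the form of a Borda count over each player's ordered reactions. This is a hypothetical sketch of that suggestion, not part of the current game design:</p>
      <preformat>
```python
def borda_scores(rankings, num_options=5):
    """Aggregate players' ranked reactions with a Borda count.

    rankings: list of per-player orderings of the White Card labels,
    most preferred first, e.g. ["B", "A", "E", "C", "D"].
    """
    scores = {}
    for ranking in rankings:
        for position, option in enumerate(ranking):
            # top rank earns num_options - 1 points, last rank earns 0
            scores[option] = scores.get(option, 0) + (num_options - 1 - position)
    return scores

players = [
    ["A", "B", "C", "D", "E"],
    ["B", "A", "C", "E", "D"],
]
print(borda_scores(players))  # prints {'A': 7, 'B': 7, 'C': 4, 'D': 1, 'E': 1}
```
      </preformat>
      <p>Unlike a single forced choice, such a tally preserves how strongly each reaction resonated across the group.</p>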
      <p>Finding 4: Contextual Clarity. Participants highlighted the need for additional context
surrounding AI prompts to facilitate informed decision-making. They proposed introducing a
game board element displaying complementary information.</p>
      <p>Finding 5: Understanding of AI in mHealth. The game facilitated an understanding
of AI’s real-world implications as it revealed participants’ varied comfort levels with sharing
different data (e.g., social media vs. physiological) for an AI-driven mHealth tool.</p>
      <p>Finding 6: Influence of Phrasing. Participants identified potential bias from prompt
wording and tone. They recommended neutral language while acknowledging humor’s role in
fostering curiosity and engagement.</p>
      <p>Finding 7: Digital Format. While some participants appreciated the physical format,
transitioning to an online platform was viewed favorably. This could enable new mechanics
like nuanced scoring and virtual moderation. Participants believed that an online version would
be more accessible and inclusive, potentially reaching a wider audience beyond physical group
settings.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <sec id="sec-4-1">
        <title>4.1. Game Adherence to Objectives</title>
        <p>Pilot testing demonstrated the serious game’s potential to achieve its key objectives. First,
the game promises to attract participants (Objective 1) through gamification elements like
badges and personalized user types, offering an enjoyable and stimulating self-discovery
experience. Moving forward, clarifying discussion goals and rewarding participation could enhance
engagement further.</p>
        <p>Second, data privacy concerns raised during gameplay highlight the game’s ability to guide
the player in learning the implications of AI in mHealth (Objective 2). A storytelling dynamic
holds the potential to further contextualize different uses of AI in mHealth. Hence, this gamified
approach seems suitable for including stakeholders with low AI literacy in the design process.
In addition to assessing ethical trade-offs between data privacy and potential health outcomes,
this gamified tool could be leveraged in other stages in the design process, e.g. to assess the
ease of use of prototypes or evaluate stakeholders’ feeling of empowerment in the co-creation
of new technology. Despite such opportunities, there is a need to explore how such a gamified
approach could be scaled without cumbersome effort in adapting the game to new use cases.</p>
        <p>Third, the structured format encourages stakeholders to debate the ethical implications of
AI technologies (Objective 3). When players shared their decision-making process between
different ethical human reactions, they provided nuanced insights that may inform AI developers
in making design decisions. In future work, each game card or AI prompt could be linked to
an AI development decision, where quantitative analysis of the players’ choices may translate
stakeholders’ values into actionable insights in alignment with Trustworthy AI principles.</p>
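        <p>The quantitative analysis envisioned here might, for example, aggregate the share of accepting reactions per development decision. The decision labels and the accept/reject coding of reactions below are assumptions for illustration, not data from the pilot:</p>
        <preformat>
```python
# Each record pairs the AI development decision behind a Black Card
# with one player's (coarsely coded) reaction to it.
responses = [
    ("use location data for route suggestions", "accept"),
    ("use location data for route suggestions", "reject"),
    ("use location data for route suggestions", "accept"),
    ("use sleep data for timing nudges", "reject"),
    ("use sleep data for timing nudges", "reject"),
]

def acceptance_rates(records):
    """Return the share of 'accept' reactions per development decision."""
    totals, accepts = {}, {}
    for decision, reaction in records:
        totals[decision] = totals.get(decision, 0) + 1
        if reaction == "accept":
            accepts[decision] = accepts.get(decision, 0) + 1
    return {d: accepts.get(d, 0) / totals[d] for d in totals}

for decision, rate in acceptance_rates(responses).items():
    print(f"{decision}: {rate:.0%}")
```
        </preformat>
        <p>Per-decision acceptance rates of this kind could give developers a concrete, stakeholder-grounded signal for each design requirement.</p>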
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Future directions</title>
        <p>Three possible future directions emerge to refine the game in subsequent design iterations:
1. In-person Digital Approach: Moving the game to a digital platform while preserving
its engaging elements could enhance accessibility and scalability. A digital version could
introduce nuanced scoring mechanisms, virtual moderation, and incorporate additional
contextual information to improve the game’s effectiveness and reduce response bias.
2. Blended Approach: Combining elements of the paper-based game with digital
components offers the advantages of both formats. This approach could maintain tangible
interaction with physical cards while integrating online features for enhanced scoring,
moderation, and broader engagement across different settings. It would cater to diverse
preferences and maximize the game’s impact.
3. Digital Survey Approach: A digital survey format could target stakeholders who may
be unwilling to dedicate time to gameplay but whose input remains valuable for AI system
design. While this approach could scale distribution and provide more representative data,
it has fewer gamification opportunities for engagement (Objective 1) and may sacrifice
the nuanced personal values that emerge from meaningful discussions during gameplay
(Objective 3), which are not trivially realized in online settings.</p>
        <p>In future research, the choice of the most suitable approach depends on the desired
engagement, accessibility, and depth of insights needed for ethical AI design and development;
A/B testing could provide further insight. Further exploration and refinement are crucial for
maximizing the game’s potential.</p>
        <p>Future validation efforts should involve broader testing with diverse stakeholder groups
beyond academic researchers and longitudinal studies to assess the game’s impact on
stakeholders’ attitudes and decision-making processes over time, establishing it as a reliable tool for
promoting responsible and trustworthy AI development.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>By combining gamified engagement, a deeper understanding of AI applications, and in-depth
ethical discussions, this approach shows promise as a tool to support the development
of trustworthy AI in mHealth aligned with stakeholder values. Further refinement efforts
could explore a fully digital format prioritizing accessibility and nuanced scoring, a blended
physical-digital approach, or even a streamlined online survey, depending on the desired balance
between engagement, accessibility, and depth of stakeholder insights gleaned.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The authors gratefully acknowledge the contributions of focus group researchers for their
valuable feedback shaping the next game design iteration.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kankanhalli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>Understanding personalization for health behavior change applications: A review and future directions</article-title>
          ,
          <source>AIS Transactions on HumanComputer Interaction</source>
          <volume>13</volume>
          (
          <year>2021</year>
          )
          <fpage>316</fpage>
          -
          <lpage>349</lpage>
          . doi:
          <volume>10</volume>
          .17705/1thci.
          <fpage>00152</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>S. McGregor</surname>
          </string-name>
          ,
          <article-title>Preventing repeated real world AI failures by cataloging incidents: The AI incident database</article-title>
          ,
          <source>in: Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence , Thirty-Third Conference on Innovative Applications of Artificial Intelligence</source>
          ,
          <source>The Eleventh Symposium on Educational Advances in Artificial Intelligence; 2021</source>
          Feb 2
          <article-title>-9; Virtual Event</article-title>
          , AAAI Press,
          <year>2021</year>
          , pp.
          <fpage>15458</fpage>
          -
          <lpage>15463</lpage>
          . doi:
          <volume>10</volume>
          .1609/AAAI.V35I17.17817.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Commission</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <article-title>Directorate-General for Communications Networks, Technology, Ethics guidelines for trustworthy AI</article-title>
          ,
          <string-name>
            <surname>Publications</surname>
            <given-names>Ofice</given-names>
          </string-name>
          ,
          <year>2019</year>
          . doi:
          <volume>10</volume>
          .2759/346720.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>European</given-names>
            <surname>Commission</surname>
          </string-name>
          , C. a. T. Directorate-
          <article-title>General for Communications Networks, The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self assessment</article-title>
          ,
          <source>Publications Ofice</source>
          ,
          <year>2020</year>
          . doi:
          <volume>10</volume>
          .2759/002360.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <article-title>An overview of artificial intelligence ethics</article-title>
          ,
          <source>IEEE Transactions on Artificial Intelligence</source>
          <volume>4</volume>
          (
          <year>2022</year>
          )
          <fpage>799</fpage>
          -
          <lpage>819</lpage>
          . doi:
          <volume>10</volume>
          .1109/TAI.
          <year>2022</year>
          .
          <volume>3194503</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>L. van Velsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ludden</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Grünloh</surname>
          </string-name>
          ,
          <article-title>The limitations of user-and human-centered design in an ehealth context and how to move beyond them</article-title>
          ,
          <source>J Med Internet Res</source>
          <volume>24</volume>
          (
          <year>2022</year>
          )
          <article-title>e37341</article-title>
          . doi:
          <volume>10</volume>
          .2196/37341.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>