<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>When no one shows up (at first): Navigating the uncertainties of participatory workshops in interdisciplinary research</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Monique Munarini</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Pisa, Largo Bruno Pontecorvo</institution>
          ,
          <addr-line>3, Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This reflective paper explores often-unspoken challenges of designing and facilitating co-design and participatory workshops, offering practical strategies for early career researchers (ECRs) navigating these methods. Drawing from personal experience conducting a series of workshops titled “How to Think About Equity in the AI Ecosystem?”, it follows the full arc of the workshop experience, from conceptualization and activity planning to participant recruitment and facilitation, offering a grounded account of what happens when participation doesn't go as expected. The paper examines the methodological challenges of engaging non-expert participants, particularly when operating without institutional support, financial incentives, or integration into larger events. Despite initial difficulties, such as low attendance, the workshop fostered rich discussions among a demographically diverse group and ultimately led to one participant volunteering to co-facilitate a subsequent session. This transition from participant to co-facilitator exemplifies the redistribution of epistemic authority, positioning lived experience as central to research and engagement practices. By reframing perceived failure as a productive site of learning, the paper offers practical strategies for ECRs working across disciplines who often navigate unfamiliar methodological terrains, contributing to broader conversations on the realities of doing interdisciplinary, participatory work in practice.</p>
      </abstract>
      <kwd-group>
        <kwd>Equity</kwd>
        <kwd>Participatory research</kwd>
        <kwd>Co-design workshops</kwd>
        <kwd>Artificial intelligence</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Participatory research offers an opportunity to shift the power dynamics of knowledge production
by bringing affected communities into the design and evaluation of reliable AI systems [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Yet, the
practice of organizing such spaces—particularly as an early career researcher navigating interdisciplinary
terrain—rarely goes as smoothly as the literature implies. This paper reflects on the design, delivery,
and lessons learned from a series of participatory workshops titled “How to Think About Equity in the
AI Ecosystem?”. Developed as part of a broader inquiry into equitable AI governance [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the workshops
were designed to bring together two often disconnected groups: AI practitioners and those affected by
AI systems, with a particular focus on youth—who are positioned to experience the long-term societal
impacts of algorithmic decision-making.
      </p>
      <p>
        The workshops focused on AI systems in recruitment because access to employment is foundational
to economic security, social inclusion, and the protection of human rights. Marginalised groups—such
as racialised individuals, migrants, people with disabilities, and gender-diverse persons—often face
systemic barriers to labour market access. The increasing adoption of AI-based tools in hiring, while
often framed as efficiency-driven or bias-reducing, has revealed significant risks of deepening these
inequalities. Notable cases include Amazon’s use of a hiring algorithm that systematically downgraded
applications from women due to historical data bias [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and job advertisement algorithms that failed
to show employment opportunities to certain groups—such as women or older candidates—because
of biased optimisation for engagement metrics [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Across three sessions, the workshops aimed to
explore how equity could be meaningfully embedded into the development of AI systems, particularly
in high-stakes domains such as automated hiring. Each session included interactive activities, co-design
exercises, and a case study on recruitment tools to support participants in understanding and critiquing
how equity operates within these systems. While the workshop for AI practitioners saw high engagement
with 14 participants, the sessions involving youth—representing affected groups—revealed the often
unpredictable nature of public engagement. One session began with only a single participant in the
room and was nearly cancelled before three additional attendees arrived. Another session had a stronger
turnout of 11 participants but presented its own logistical and facilitation challenges.
      </p>
      <p>These moments of uncertainty are rarely acknowledged in academic reporting, yet they carry
important methodological insights—especially for early career researchers with limited funding, institutional
support, or disciplinary training in participatory methods. This paper offers a reflection on the process
of developing and hosting participatory workshops, with a particular focus on the session with low
attendance. Rather than framing the experience as a failure, it is reframed as a valuable point of
learning: an invitation to rethink how we define success in participatory work and how we prepare for
unpredictability.</p>
      <p>This paper offers a reflection-in-practice, structured around the key stages of workshop development
and facilitation. It focuses on the process of implementing participatory workshops as a methodological
approach—rather than reporting full data analysis or outcomes. While insights from participant
discussions are referenced to illustrate key points, the systematic analysis of workshop data and
indicators co-designed during the sessions will be presented in a separate, forthcoming publication.
First, this paper introduces the motivation and structure of the workshop series, detailing the design
process and the specific goals of engaging both AI practitioners and affected groups—particularly
youth. The following section offers an observational narrative of the most unpredictable session—where
attendance was nearly zero at first—highlighting practical adaptations made in real time. Building on
this, the paper reflects on what worked, what did not, and how we could reframe such experiences as
methodologically generative rather than failed. The final section offers practical recommendations for
early-career researchers undertaking similar work under resource constraints, arguing that small-scale
participatory practices still hold valuable lessons.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Planning the workshop</title>
      <p>
        The series of workshops titled “How to think about equity in the AI ecosystem?” was designed to
explore how equity could be integrated into AI systems, particularly within automated hiring. The final
goal is to operationalise the equity definition ‘Providing meaningful access to the necessary resources for
individuals who need it to belong to a community’ proposed by [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This definition draws on feminist
theory and the design justice approach, emphasizing not equality of input, but rather redistributive
measures tailored to historical and structural exclusions. In this context, “belonging” refers not merely
to inclusion, but to a sense of recognition, participation, and influence in sociotechnical decision-making
spaces. The title refers to AI ecosystems as developed by Stahl [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], whose framing holds that AI systems are developed and
deployed within multiple relational fields of knowledge, in analogy to a biological ecosystem. The
workshop draws from participatory design traditions that emphasise mutual learning, horizontal power
dynamics, and situated knowledges, treating participants not as users or test subjects but as co-creators
of socio-technical imaginaries [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. This research chose to embrace the challenge of working with the
“messy middle” as defined by [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] as a combination of the perspectives of AI practitioners and
affected communities. This paper focuses on one of the workshops in this series. This specific session
targeted youth, broadly defined as individuals aged approximately 18 to 35, encompassing millennials
and Generation Z. This age group was chosen because they are poised to experience the long-term
societal consequences of AI systems, especially in early- and mid-career stages where hiring algorithms
may shape access to employment and mobility opportunities. Rather than defining youth narrowly by
age, eligibility criteria were kept intentionally broad: participants needed only to not be AI experts and
to identify with a group potentially subject to discrimination in hiring. This flexible framing aimed
to centre lived experience and inclusivity over disciplinary credentials. Ten individuals registered in
advance. The group represented a wide range of disciplinary and professional backgrounds—including
Business, Law, Computer Science, Engineering, Political Science, and work in the third sector—and
included participants from both the Global North and Global Majority. They self-identified across a
variety of gender identities and generational categories. This demographic diversity was essential to the
project’s goal of surfacing intersectional perspectives on equity in algorithmic systems. Recruitment
was conducted in collaboration with a youth-focused civic organisation, and was promoted via LinkedIn
and Instagram by both the host and the NGO. Unlike the earlier workshop with AI practitioners, where
all discussions were recorded, the youth workshop adopted a more cautious approach to documentation:
only the final group presentations were recorded, creating a safer environment in case sensitive or
personal experiences emerged.
      </p>
      <sec id="sec-2-1">
        <title>2.1. Pre-Workshop design</title>
        <p>A registration form helped inform the facilitation plan and ensure demographic diversity within small
groups. In addition, express consent was required on the registration form to collect the demographic
data. Participants were asked to share: (i) basic demographics (e.g., gender identity, generation), (ii)
self-assessed AI expertise (on a scale from 1 to 5), (iii) professional background, (iv) whether they
belonged to a vulnerable group or marginalised group (e.g., based on migration status, race, disability,
gender identity, socioeconomic background). These categories were self-defined by participants and
helped situate how algorithmic systems may differently impact individuals depending on intersecting
structural conditions, (v) their definition of equity. The registration form asked for an identifier
(anything from initials to colours to nicknames) that participants needed to remember, as they would be
divided into groups according to it. No email or other contact information was requested.</p>
        <p>
          This data helped anticipate group dynamics and guided the introductory framing. Inspired by
critical pedagogy [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], the workshop functions as both a site of collective inquiry and an educational
space, aimed at building AI literacy as a form of civic empowerment. The facilitator developed a
10-minute presentation covering foundational concepts for basic AI literacy: the hype around
AI, technosolutionism, ethics-washing [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ], and algorithmic bias [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. These ideas were illustrated
with real-world case studies, such as AI tools that delivered discriminatory job ads based on gendered
assumptions [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. To balance critique with possibility, the presentation also introduced a chatbot
designed by a South African civil society organisation using participatory methods [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. To avoid
framing AI systems solely in negative or deterministic terms, the workshop introduced “Question
Zero” [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]—an ethical prompt that asks: Is it necessary to deploy an AI system in this context at all?
This framing encouraged participants to step back from assumptions about innovation and consider
whether algorithmic intervention was justified in the first place. The presentation concluded by
explaining how initiatives such as the one the participants were attending are essential to turning
community engagement into empowerment, and how this can benefit society. Drawing on the
principles of design justice [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], the workshops seek to redistribute not only access to technology but
also the power to shape its development—foregrounding lived experience from insiders and outsiders of
the AI lifecycle as central to ethical and equitable design processes. This framed participatory research
not merely as data collection, but as a pathway to community empowerment: a way for groups
that do not usually participate in the design, deployment, and evaluation of AI systems to share
their concerns and demands.
        </p>
        <p>The workshop was developed in collaboration with a youth-led association, an NGO known for its
sustained engagement with young people on issues of civic participation, human rights, and education.
The organisation welcomed the opportunity to incorporate the workshop within its broader
dissemination and outreach activities. It actively promoted the event via its social media channels and website,
and played a crucial role in securing a venue. The workshop was hosted at a partner research center
affiliated with the NGO, which provided the space free of charge. It is also essential to note that none of
the participants, nor the association itself, received financial compensation for their involvement.</p>
        <p>While no identifying contact information was collected, participants were invited to express
interest in future collaboration during the workshop itself. One participant subsequently volunteered to
co-facilitate the next session. This informal pathway allowed for ongoing engagement without
compromising privacy or requiring the collection of sensitive data. It was also agreed that the
youth-based association would share the future outputs of the workshop on its social media channels.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Workshop Agenda</title>
        <p>The workshop was structured as follows:</p>
        <table-wrap id="tab-1">
          <table>
            <thead>
              <tr><th>Time</th><th>Session</th><th>Objective</th></tr>
            </thead>
            <tbody>
              <tr><td>10 min</td><td>Intro</td><td>Introduce AI literacy concepts</td></tr>
              <tr><td>15 min</td><td>AI Practice</td><td>Exchange and consolidate ideas about AI</td></tr>
              <tr><td>5 min</td><td>Debriefing</td><td>Discuss the results of the AI practice</td></tr>
              <tr><td>15 min</td><td>Equity</td><td>Develop understanding of equity</td></tr>
              <tr><td>5 min</td><td>Break</td><td>Coffee and pastry break</td></tr>
              <tr><td>50 min</td><td>Indicators</td><td>Co-create potential indicators covering who, what, how, and why</td></tr>
              <tr><td>15 min</td><td>Presentations</td><td>Groups present their work and facilitate a group-wide discussion</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>
          The AI practice activity was conceived as an icebreaker using ideation cards [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] to develop a deeper
understanding of key concepts and ethical questions around AI systems.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Activity 1: Understanding Equity</title>
        <p>
          The first major activity deepened participants’ understanding of equity by testing the working definition
proposed by [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]: ’Providing meaningful access to the necessary resources for individuals who need it to
belong to a community’ in recruitment contexts. Before the start of the activity, participants would be
divided into small groups (3–7 people). The task would unfold in two stages: 1) Individual brainstorming
(5 min): Participants wrote examples from recruitment processes on post-its, noting whether each
was equitable, the identity categories involved (e.g. homeless, international student), and whether the
example applied to them personally (for instance, unpaid internships, algorithmic filtering of foreign
names, or accommodations for disability disclosures). 2) Group sharing (10 min): Teams placed their
post-its along large A1 sheets representing a spectrum from “no equity” to “equity” and discussed how
these examples illustrated or challenged the definition, sparking dialogue about what it captures and
where it falls short. This activity was designed not only to surface concrete insights into equity in
recruitment processes but also to serve as a warm-up, priming participants for the more complex
design thinking of the second half, where AI systems would be brought into the process.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Activity 2: Co-designing equity indicators</title>
        <p>
          Building on the working definition of equity, the second activity focused on co-designing
indicators—defined as “points of action” that can be used to evaluate whether an AI system supports equitable
outcomes. The activity centred on a fictional AI hiring system called ARIA, deployed by the fictional
recruitment company La Dolce Vita and developed specifically for this workshop as a composite example
inspired by the Amazon case [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Participants used the prompt format “A
[person/role] should [action]” and linked each indicator to a phase in the hiring pipeline, including a
“Question Zero” stage that asks: Why use AI in this context at all? (e.g., “HR manager should review
rejected applications flagged by the system”). Each indicator was attached to one or more phases of the
AI hiring pipeline [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], see Figure 1. Examples of indicators included:
        </p>
        <p>“The HR could know more about the logical chain of the background of the position and then use
this to screen candidates with better keywords”</p>
        <p>“Hiring manager should prepare a set of questions that show applicants that the company really
cares about diversity”
“Candidates should be given the possibility to do the same interview in different formats”
“Hiring platforms should clearly explain how gamification assesses learning ability (after acquiring
the job)”</p>
        <p>Each group also discussed who the indicators targeted (e.g., recruiter, developer, platform owner),
what they revealed (e.g., transparency, bias, oversight), and where they fit in the recruitment process
(e.g., sourcing, shortlisting, final selection).</p>
        <p>This activity bridged abstract ethical concerns and practical accountability tools, empowering
participants to contribute to real-world design logics—even without technical backgrounds.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Day of the workshop</title>
      <p>The workshop was scheduled to begin with ten registered participants. However, at the designated start
time, only one person was present. This moment marked a critical inflection point in the facilitation
process: whether to proceed, postpone, or adapt. Opting to adapt, the organiser extended the
welcome period. Within the next forty minutes, three additional individuals joined—two of whom had
not formally registered. The final group consisted of four participants, each with distinct academic
and professional trajectories. Though smaller than anticipated, the group composition offered an
opportunity for a more intimate and dialogic mode of engagement. Accordingly, the original facilitation
plan—structured around small-group breakout activities and collective synthesis—was adapted in
real time to accommodate a single, continuous group dialogue. Their differing life experiences and
positionalities shaped the discussion of equity and AI in rich and grounded ways—illustrating that
epistemic value does not scale linearly with participant numbers.</p>
      <p>There were four main positive outcomes from this new scenario. First, participants were able to
speak at length about their personal experiences, professional aspirations, and perceptions of AI-driven
systems. Second, the icebreaker and two activities were repurposed as group conversations rather
than segmented exercises, which increased engagement. Third, the facilitator was able to provide
tailored clarification of concepts and create space for affective responses around AI systems. Fourth, in
comparison to the other workshops, which involved different groups, it was easier to take observational notes.</p>
      <p>Importantly, the absence of audiovisual recording—save for final presentations—enhanced the
perceived safety of the space, encouraging openness and vulnerability. While this session did not meet
expectations in terms of participant numbers, it arguably exceeded them in terms of epistemic richness
and relational depth. The experience reinforced the importance of flexibility in participatory research
design and highlighted the value of micro-scale engagements in surfacing contextually grounded
insights.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Strategies for future workshops</title>
      <p>While the second workshop experienced significant initial challenges in participant turnout, it ultimately
fulfilled its core aim: fostering a co-design space where diverse, non-expert voices could engage critically
with AI systems used in recruitment. Despite beginning with only one attendee and facing the possibility
of cancellation, the group eventually grew to four participants—two of whom were not previously
registered. Importantly, this small group still embodied a range of gender identities, professional
backgrounds, and geographies (Global North and Global Majority), aligning with the diversity goals
outlined in the registration process. This diversity enriched the discussions and demonstrated that
even with modest numbers, meaningful deliberation and learning can emerge when intersectional
perspectives are present. One key factor that contributed to the workshop’s success was the collaborative
spirit of the participants and their interest in the topic of AI and equity. The active engagement of these
individuals highlighted the value of designing workshops that prioritise inclusive participation and
accessible language, especially when working with non-expert audiences.</p>
      <p>Reflecting on the logistical and organisational dimensions, several strategic insights emerged for
ECRs aiming to replicate similar initiatives under resource constraints:
• Embed the workshop within a larger event: Hosting a workshop as part of a broader
conference, symposium, or institutional programme significantly improves visibility, accessibility, and
logistical ease. It also typically ensures access to basic infrastructure—such as a venue and
refreshments—which can reduce both cost and coordination burden. More importantly, such contexts
often come with a pre-registered audience, which increases the likelihood of turnout and offers
opportunities for serendipitous engagement. All the other workshops in the series followed this strategy
and had a significantly higher number of participants.
• Build on existing networks and collaborations: Establishing connections with practitioners in
participatory research in the AI ecosystem early in the design process proved invaluable. These
connections not only provided critical feedback on the workshop structure but also offered
opportunities for mutual learning and idea exchange. Informal conversations with professionals
and academics helped refine the framing of the activities and the case study, and created pathways
for further collaboration.
• Leverage prior experience to build facilitation capacity: The development of the workshop was
also rooted in the facilitator’s prior involvement in other co-design initiatives. Experience gained
through participating in workshops in different projects helped build the confidence, adaptability,
and methodological toolkit needed to design and implement a workshop closely aligned with the
aims of a doctoral research project.
• Design strategies to remind registered participants: As the registration form did not collect
identifiable data such as email addresses, participants could not be reminded about the event. Using tools
such as iCalendar or anonymous RSVP systems may help reduce no-shows while maintaining
participant privacy. In this project, the decision not to collect contact information—such as names
or emails—was based on ethical and methodological concerns, particularly the aim of lowering the
barrier for marginalised or privacy-conscious individuals to participate in critical conversations
about technologies that may already surveil or profile them.</p>
      <p>
        One particularly meaningful outcome emerged after the second workshop, when a participant
expressed a strong interest in the topic and volunteered to co-facilitate the third workshop. This
transition — from participant to co-facilitator — illustrates what Sandra Harding [19] and Patricia
Collins [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] might describe as a redistribution of epistemic authority, where lived experience and situated
knowledge are not only valued but become integral to the design and facilitation process itself. This
shift challenges traditional researcher–participant hierarchies and aligns with the principles of feminist
participatory research [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], which emphasize reciprocity, mutual learning, and empowerment [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
The participant’s previous engagement added depth to the facilitation process, and their familiarity
with the activities helped foster a more inclusive and collaborative environment. This movement
suggests the potential of participatory workshops as not just data collection tools, but also as spaces
of capacity-building and knowledge co-production — where ownership of the process can extend
beyond the researcher’s initial vision. Although small in scale, the workshop generated outcomes
that extended beyond anecdote. Participants collectively surfaced indicators for equitable AI systems,
critically examined real-world equity challenges in hiring platforms, and one participant transitioned
into a co-facilitator role for the next session. These instances reflect measurable redistributions of
epistemic authority and support the view that participatory research—when conducted inclusively
and reflexively—can yield both conceptual insight and practical transformation, even under resource
constraints.
      </p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This paper has presented a reflective account of a participatory workshop series designed to explore
equity in the AI ecosystem, particularly in the context of recruitment systems. Framed as a
methodological reflection, the paper shares the challenges, unexpected moments, and practical strategies involved
in designing and facilitating co-design workshops with afected groups—specifically identified as likely
to face long-term impacts of algorithmic decision-making.</p>
      <p>A central lesson emerging from this experience is that diversity in participation matters more than
the number of attendees. Even when turnout was lower than expected, the presence of participants
from varied gender identities, geographies, and professional backgrounds created meaningful dialogue
and contributed to a deeper exploration of equity.</p>
      <p>The experience also revealed how practical constraints—such as the absence of a larger hosting event
or funding for participant recruitment—can significantly shape outcomes. Embedding participatory
activities within broader events or institutional structures can ease logistical pressures, enhance visibility,
and increase turnout. At the same time, building relationships with practitioners and drawing on
previous facilitation experience emerged as essential strategies to strengthen both the workshop
design and delivery. This paper does not claim to ofer a universal model for ECRs working with
participatory design in AI governance. Rather, it seeks to contribute to the growing body of work that
takes participatory research seriously as both method and ethos. By sharing a grounded account of
what worked, what did not, and how small-scale experiences can still generate valuable insights, this
reflection aims to support other early-career researchers navigating similar paths—especially those
working across disciplinary boundaries and without guaranteed institutional support. In doing so, it
reframes workshop ‘failures’ not as dead-ends, but as part of a productive learning process, where
reflective practice itself becomes a mode of knowledge production. Co-designing equitable AI systems
is not only a socio-technical challenge, but a relational and situated one—and participatory workshops,
even with their uncertainties, remain a vital space for this work to unfold.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author used GPT-4 and Grammarly for grammar
and spelling checks. After using these tools, the author reviewed and edited the content as
needed and takes full responsibility for the publication’s content.</p>
      <p>[19] S. Harding, Whose Science? Whose Knowledge? Thinking from Women’s Lives, Cornell University
Press, Ithaca, NY, 1991.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Birhane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Isaac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Prabhakaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Diaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Elish</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Gabriel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mohamed</surname>
          </string-name>
          ,
          <article-title>Power to the people? Opportunities and challenges for participatory AI</article-title>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Munarini</surname>
          </string-name>
          ,
          <article-title>Practicing equity in the AI ecosystem: Co-designing solutions for sociotechnical challenges</article-title>
          ,
          <source>in: Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2025</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dastin</surname>
          </string-name>
          ,
          <article-title>Amazon scraps secret AI recruiting tool that showed bias against women</article-title>
          , in: Ethics of Data and Analytics,
          <source>Auerbach Publications</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sapiezynski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bogen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Korolova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mislove</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rieke</surname>
          </string-name>
          ,
          <article-title>Discrimination through optimization: How Facebook's ad delivery can lead to biased outcomes</article-title>
          <volume>3</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Munarini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Brusseau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Angeli</surname>
          </string-name>
          ,
          <article-title>Equitable AI audits: Evaluating the evaluators in today's world</article-title>
          ,
          <source>in: Proceedings of the 17th International Conference on Theory and Practice of Electronic Governance</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B. C.</given-names>
            <surname>Stahl</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence for a better future: an ecosystem perspective on the ethics of AI and emerging digital technologies</article-title>
          , Springer Nature,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Torre</surname>
          </string-name>
          ,
          <article-title>Re-membering exclusions: Participatory action research in public institutions</article-title>
          ,
          <source>Qualitative Research in Psychology 1</source>
          (
          <year>2004</year>
          )
          <fpage>15</fpage>
          -
          <lpage>37</lpage>
          . doi:10.1191/1478088704qp004oa.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P. H.</given-names>
            <surname>Collins</surname>
          </string-name>
          ,
          <article-title>Black Feminist Thought: Knowledge, Consciousness, and the Politics of Empowerment</article-title>
          , 2nd ed., Routledge, New York,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <article-title>A framework and self-assessment workbook for including public voices in AI</article-title>
          ,
          <year>2025</year>
          . URL: https://elgonsocial.wordpress.com/wp-content/uploads/2025/03/frameworksbooklet_v2.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Freire</surname>
          </string-name>
          ,
          <article-title>Pedagogy of the oppressed</article-title>
          , in: Toward a Sociology of Education, Routledge,
          <year>2020</year>
          , pp.
          <fpage>374</fpage>
          -
          <lpage>386</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B.</given-names>
            <surname>Wagner</surname>
          </string-name>
          ,
          <article-title>Ethics as an escape from regulation: From “ethics-washing” to ethics-shopping?</article-title>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          UNESCO,
          <article-title>I'd blush if I could: Closing gender divides in digital skills through education</article-title>
          ,
          <year>2019</year>
          . URL: https://unesdoc.unesco.org/ark:/48223/pf0000367416, accessed on April 13,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>AlgorithmWatch</surname>
          </string-name>
          ,
          <article-title>Automated discrimination: Facebook uses gross stereotypes to optimize ad delivery</article-title>
          ,
          <year>2020</year>
          . URL: https://algorithmwatch.org/en/automated-discrimination-facebook-google/, accessed on April 13,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>GRIT</surname>
          </string-name>
          ,
          <article-title>GRIT - Gender Rights in Tech</article-title>
          ,
          <year>2025</year>
          . URL: https://www.grit-gbv.org, accessed on April 13,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lindgren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. E.</given-names>
            <surname>Tucker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dignum</surname>
          </string-name>
          ,
          <article-title>The Swedish AI Commission's Strategic Roadmap Dodges Question Zero</article-title>
          ,
          <year>2024</year>
          . URL: https://wasp-hs.org/the-swedish-ai-commissions-strategic-roadmap-dodges-question-zero/.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Costanza-Chock</surname>
          </string-name>
          ,
          <article-title>Design justice: Community-led practices to build the worlds we need</article-title>
          , The MIT Press,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Darzentas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Velt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wetzel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Craigon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. G.</given-names>
            <surname>Wagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Urquhart</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Benford</surname>
          </string-name>
          ,
          <article-title>Card Mapper: Enabling data-driven reflections on ideation cards</article-title>
          ,
          <source>in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI '19</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          . URL: https://doi.org/10.1145/3290605.3300801.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fabris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Baranowska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Dennis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Graus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hacker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Saldivar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zuiderveen Borgesius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Biega</surname>
          </string-name>
          ,
          <article-title>Fairness and bias in algorithmic hiring: A multidisciplinary survey</article-title>
          ,
          <source>ACM Transactions on Intelligent Systems and Technology</source>
          <volume>16</volume>
          (
          <year>2025</year>
          )
          <fpage>1</fpage>
          -
          <lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Harding</surname>
          </string-name>
          ,
          <article-title>Whose Science? Whose Knowledge? Thinking from Women’s Lives</article-title>
          , Cornell University Press, Ithaca, NY,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>