<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>From inclusion to illusion: the pitfalls of ethicswashing in Participatory AI practices</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Riccardo Corsi</string-name>
          <email>riccardo.corsi@gssi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Beatrice Melis</string-name>
          <email>beatrice.melis@gssi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianluca De Ninno</string-name>
          <email>gianluca.deninno@gssi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tiziana Nupieri</string-name>
          <email>tiziana.nupieri@uniroma1.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Area, Gran Sasso Science Institute, L'Aquila</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Computer Science, University of Pisa, Pisa</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Department of Political Sciences, Sapienza University of Rome, Rome</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>This article addresses the increasing use of participatory approaches in the development of artificial intelligence (AI), examining their democratic potential and associated risks. While participatory AI promises to democratize technological design and enhance inclusive governance, it often falls short due to power imbalances, vague definitions, and superficial implementation, leading to what has been defined as participatory washing. Drawing from studies on participation in political science, public policy, and technology, the paper proposes a conceptual framework to critically assess participatory practices along four dimensions: power, goals, actors, and arenas. The goal is to support practitioners in avoiding instrumental uses of participation and to foster genuinely empowering and accountable AI development processes.</p>
      </abstract>
      <kwd-group>
        <kwd>Participatory AI</kwd>
        <kwd>AI democratization</kwd>
        <kwd>participatory washing</kwd>
        <kwd>ethics washing</kwd>
        <kwd>public policy</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The rapid advancement of artificial intelligence (AI) has presented new societal opportunities and
challenges. Participatory approaches have emerged to align AI with ethical values and societal needs.
The need for increased inclusiveness in the AI development process stems from the significant
impact that AI-based technologies have in numerous fields, including critical ones such as healthcare, safety,
education, and employment in the labor market. Alongside this, as highlighted by the European Commission
itself, there is a growing demand for participation from non-governmental and non-technical societal
actors, with the goal of creating effective and socially responsible systems [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>One of the major strengths of the call for participatory AI lies in its double promise: to democratize
the design and development of AI-driven technologies and services, while also being integrated into
democratic processes that involve participatory practices. However, despite these claimed advantages,
participatory AI and related approaches are not free of risks, and their definitional vagueness makes
them prone to misunderstanding and misuse, the most significant being the phenomenon
of participatory washing, where the effort to include and collaborate with external actors results in a
virtuous simulation rather than actual power-sharing and genuine empowerment.</p>
      <p>
        In particular, this work aims to: (1) contribute to a theoretical understanding of participatory AI
implications, especially concerning power asymmetries and related issues; and (2) support practitioners
and developers seeking to adopt participatory approaches by helping them avoid instrumental or
superficial uses of participation. In doing so, we refer to the tradition of the analysis of public policy in
political science and political sociology, with a specific focus on participation, e.g., [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ] and related
critical studies [
        <xref ref-type="bibr" rid="ref6 ref7 ref8 ref9">6, 7, 8, 9</xref>
        ], as well as existing, more technology-centered literature on participatory AI,
e.g., [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10, 11, 12</xref>
        ]. By integrating these contributions, this article offers an initial development of a
theoretical framework for understanding AI-related participatory practices. Rather than focusing on
empirical case studies, it adopts a more conceptual perspective aimed at clarifying the phenomenon
itself and offering insights into how to avoid participatory washing [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. The call for participation: from public policy to artificial intelligence</title>
      <sec id="sec-2-1">
        <title>2.1. Participation in public policy: does participating make me more powerful?</title>
        <p>
          Participation is a key concept in social sciences and socio-political research addressing democracy and
the relationship between citizens and institutions [
          <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
          ]. Academic discourse has long examined
the reasons behind the diffusion of participatory forms [
          <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
          ], their diversity [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], as well as the
potential and risks associated with their use and role [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. It is far from a univocal notion, as it
encompasses multiple dimensions. As noted, participation must be problematized and historicized to be
fully understood [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. On one hand, it refers to the capacity of individuals to actively shape political
decision-making and influence the quality and outcomes of public policies. In this sense, Segatori [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]
defines political participation as a set of attitudes and behaviors through which individuals seek to steer
political decisions. On the other hand, participation also serves as a normative foundation for democratic
systems, strengthening the legitimacy of both inputs and internal processes of policy-making [
          <xref ref-type="bibr" rid="ref20 ref21 ref22">20, 21, 22</xref>
          ].
Since the 1990s, Western democracies have experienced a “deliberative turn” [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], fostering participatory
mechanisms supplementing traditional representative channels and promoting direct interactions in
policy-making between institutions and civil society beyond conventional interest mediation.
        </p>
        <p>
          The inclusion of civil society is seen as enhancing democracy through principles like inclusiveness,
transparency, and accountability [23, 24]. In this context, participatory governance is viewed as a
form of deliberative democracy, aimed at expanding the public sphere and making decision-making
processes more open and accountable, as the publication of the European Governance White Paper in
the EU formally recognized [25]. At the same time, “new” participatory forms have emerged, following
an isomorphic trend and spreading across various sectors — from social policies and welfare to the
environment and urban planning — such as Participatory Budgeting, Citizen Juries, and Public Debates [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
These top-down participatory processes, led by institutional actors in local contexts, differ from the
participation processes of the 1960s and 1970s in that they are scarcely characterized by ideological aims
and are more focused on problem-solving for specific policy issues [
          <xref ref-type="bibr" rid="ref18 ref8 ref9">8, 9, 18</xref>
          ].
        </p>
        <p>
This shift has been driven by assumptions that participation effectively addresses the challenges of
consensus-building, social cohesion, and collaboration [26]. These practices rely on direct engagement
between institutions and citizens, beyond traditional channels of interest representation, ideally
expanding access to decision-making [27]. Such informed and reasoned dialogue is expected to enhance
the efficiency, legitimacy, and cooperation in public decision-making processes, particularly at the local
level [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. This idea has also been fueled by a normative convergence in both academic literature and
among governance actors, where participation is often viewed as inherently positive and capable of
producing beneficial effects on decision-making processes and the quality of democracy [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
        <p>
But is this always the case? Does the simple fact of participating in a process lead to effective
empowerment for all of us? Several criticisms have been raised concerning participatory practices,
and they are worth noting before focusing on the technological domain. First of all, in most cases,
these mechanisms are top-down [28], which means that stakeholders are involved mainly to provide
feedback and have little or no influence on the actual content of decisions [
          <xref ref-type="bibr" rid="ref6 ref8">8, 6</xref>
          ]. Then, it has been
pointed out that there is considerable uncertainty about the representative capacity of participatory
practices, particularly regarding how well they reflect diverse interests within society, with legitimacy
issues and a lack of accountability of the actors involved, compared with elected representatives [
          <xref ref-type="bibr" rid="ref21">29, 21</xref>
          ]. Another
relevant issue is the lack of social learning within participatory practices, which struggle to
affect policy paradigms or guiding ideas in policy-making [30, 31]. Furthermore, power asymmetry
among stakeholders, stemming from unequal resources, often leads to uneven influence in participatory
processes [32, 33, 34]. Moreover, these processes frequently prioritize corporate objectives, aligning
interests toward business rather than societal needs [35].
        </p>
        <p>
As recognized by Edmunds and Wollenberg [36], these unequal capacities to influence decisions
perpetuate inequalities in the policymaking process. Ultimately, participatory practices risk being aimed
at avoiding conflicts and fostering an appeasement that advantages those in power [
          <xref ref-type="bibr" rid="ref7">7, 37</xref>
          ], producing a
depoliticization of collective interests’ issues [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], thus resulting in a democratic illusion [38].
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. From e-democracy to democratizing AI: the double promise of Participatory AI</title>
        <p>
Contemporary governance innovations, such as deliberative assemblies and citizen panels, have sought
to shift towards more horizontal structures, where citizens contribute substantive input to policy
formation and advocate for openness in decision-making processes [
          <xref ref-type="bibr" rid="ref22">22, 39</xref>
          ]. It has been argued that the
principles underlying participatory AI closely mirror those pertaining to democratic governance [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ],
particularly in participatory decision-making processes within political systems [40]. Just as co-design
and participatory governance mechanisms in politics aim to involve citizens in public decision-making,
participatory AI seeks to democratize technological development by incorporating, e.g., user
inputs [41]. However, participatory AI encompasses much more than this.
        </p>
        <p>
          A key dimension of the relationship between participation and AI lies in the emergence of new forms
of engagement mediated by digital platforms and networking tools, sparking discussions regarding the
functioning and application of e-participation or digital participation [42]. Digital platforms, virtual
networks and social media are increasingly recognized as facilitating civic participation by fostering
direct communication between citizens and administrations, thus enhancing democratic interactions
[43]. AI systems have the transformative potential to stimulate democratic participation by significantly
reducing informational and cognitive burdens on citizens, thereby fostering more informed and reflective
political engagement [44, 45]. AI systems can influence democratic processes by aiding electoral
processes, public consultation, mass online deliberation, or participatory decision-making processes
[46], facilitating also agenda-setting, opinion synthesis, and consensus-building, thereby enhancing
transparency and responsiveness of democratic institutions [47]. Through adversarial machine learning,
AI can protect citizens’ privacy from invasive political profiling, enabling individuals to deliberate
freely and independently, preserving the authenticity of their political choices [48]. Nevertheless, the
deployment of AI in democratic processes poses serious risks, including the potential to reinforce
biases and power asymmetries inherent in algorithmic governance [49], threats to freedom of speech
and media pluralism, access to public information, truth, and essential services [50, 51, 52]. Recent
AI developments reflect a “participatory turn” [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], shifting from top-down technical designs toward
inclusive practices actively involving diverse stakeholders — developers, end-users, domain experts,
and marginalized communities [53].
        </p>
        <p>
          Such participatory methods aim to achieve continuous alignment between technological solutions
and the diverse needs, values, and preferences of those whom these technologies are designed to support
[
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. This recalls how participatory design was initially applied in technological development in the
1990s, with the recognition that every societal actor can be a legitimate source of knowledge production
[54]. The need to include different stakeholders is recognized as a way to mitigate risks and harms, as
argued by the Global Partnership on AI: “products that are not created with an inclusive approach do not
serve all users of technology equally and in some cases they can actively harm communities, especially
those who have been historically excluded or marginalized” [55]. This aspect has gained attention
and is starting to be considered a driver of technological innovation [56]. Engaging stakeholders
throughout the entire development lifecycle can ensure that systems reflect the lived experiences and
expectations of those who interact with them. The engagement of citizens and participatory impact
assessments are recognized as viable ways to prevent risks in our datafied societies and enhance
self-determination in the digital realm [57]. This is because the call for these participatory practices is
rooted in the idea of defending the values of democracy, human rights, and autonomy from the risks of
being eroded by the concentration of power and unaccountable AI development [58]. The founding
goal of this approach is embodied in the pursuit of democratizing AI development by ensuring that
those affected by their adoption have a stake in their shaping, even if they are non-experts [59].
        </p>
        <p>
However, participation is not a one-size-fits-all solution; it can assume several different
meanings and describe totally different practices. As the contribution by the Ada Lovelace Institute [60]
highlighted, participatory mechanisms can be integrated at five different levels — inform,
consult, involve, collaborate, and empower — shaping a spectrum of involvement practices ranging from
engagement to deep, systemic co-creation. Other studies, e.g., the one in [61], conceptualize four levels
of participatory AI: consultation, contribution, collaboration, and co-creation. Sloane et al. [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] elucidate
an epistemological distinction of the concept, distinguishing between participation as work, as
consultation, and as social justice. The work by Birhane et al. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] focuses on the aims, identifying three
objectives of participatory AI: for algorithmic performance improvements, process improvements, and
collective exploration, which should be more centered on stakeholders’ needs. The existing body of
work provides valuable frameworks for understanding the diverse range of participatory practices in
AI development, but there is still difficulty in operationalizing these frameworks effectively in practice
[
          <xref ref-type="bibr" rid="ref10 ref23">10, 62, 63, 64</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Challenges and pitfalls of participatory practices and the spectre of ethicswashing</title>
      <sec id="sec-3-1">
        <title>3.1. Challenges and pitfalls of AI-related participation practices</title>
        <p>
          Much like in public policy, the state of participatory approaches to AI development is largely
consultative in nature, which means that while stakeholders provide input on AI system modules, they are
often not integrated as active decision-makers in the broader lifecycle of AI development [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Using
Arnstein’s “Ladder of Citizen Participation” [
          <xref ref-type="bibr" rid="ref24">65</xref>
          ], Corbett et al. [41] note that participatory AI mostly remains
at consultative levels, often merely legitimizing decisions already made.
        </p>
        <p>
          Additionally, there is a real risk of reifying or amplifying existing power dynamics through
participatory AI processes. As the work by Delgado et al. [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] highlights, while participatory mechanisms in AI
aim to include marginalized voices, there is always the danger that these efforts reinforce rather than
challenge the status quo, especially when the underlying power structures are not addressed early in
the design process.
        </p>
        <p>
          Birhane et al. [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] address another key challenge in the inclusion debate. They emphasize that while
being included in participatory processes might have practical consequences on engagement, inclusion
alone is not equivalent to participation. As they note, an individual can be part of a group but still not
actively participate, for example by not voting, writing, or acting [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. This distinction between inclusion
and true participation is crucial when discussing participatory AI, as it underlines the diference between
being invited to the table and having actual influence over decisions.
        </p>
        <p>
The main challenges in achieving meaningful participation in AI arise from the inherent complexity
of data processing and the technical issues of AI systems. The massive amounts of data and
computational demands involved in developing AI systems, particularly Large Language Models (LLMs),
raise concerns regarding the feasibility of meaningful human participation at the scales at which these
systems operate. As Ananny and Crawford [
          <xref ref-type="bibr" rid="ref25">66</xref>
          ] discuss, the ideal of transparency applied to algorithmic
accountability may not be realistic due to the inherent complexities and black-box nature of AI systems.
These challenges complicate the notion of inclusion, as they directly impact the interpretability and
comprehensibility of the AI development process and its implementation.
        </p>
        <p>
Moreover, the scaling of AI systems poses a significant issue for defining useful forms of participation.
As noted, meaningful human participation may not even be feasible on the large scales at which many
AI systems operate, whether in terms of data collection across vast geographies or system deployment
in diverse contexts [
          <xref ref-type="bibr" rid="ref25">66</xref>
          ]. The operation of AI at these scales often results in “fossilized preference
models”, which may create a substantive gap between the preferences of human agents and algorithmic
decision-making, thus functioning as a technology of depoliticization [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Proxies, such as aggregated
data or simulations, are increasingly used in participatory AI contexts but carry significant risks. They
can inadequately represent marginalized voices, exacerbate power imbalances, and reduce the agency
of afected communities, thereby distorting democratic processes and sidelining authentic stakeholder
engagement [
          <xref ref-type="bibr" rid="ref1 ref10">1, 10</xref>
          ].
        </p>
        <p>
Moreover, focusing on a social side effect of machine learning approaches that are defined as
participatory, Sloane et al. [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] coined the term “participatory washing”, referring to extractive
and exploitative approaches to community involvement — specifically called ethicswashing when
they address ethical concerns.</p>
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Participation as a branding device for digital products?</title>
        <p>
          In fact, amid growing recognition of the widespread influence of socio-technical systems and the
mounting risks they entail, companies and developers have increasingly turned to ethics-based initiatives as a
means of signaling trustworthiness and strengthening brand legitimacy [
          <xref ref-type="bibr" rid="ref26">67</xref>
          ]. The notion of
ethicswashing — adapted from the environmental domain where it denoted superficial displays of sustainability
[
          <xref ref-type="bibr" rid="ref27">68</xref>
          ] — has become a critical lens through which these practices are analyzed. It describes the strategic
invocation of ethical discourse by private or public actors in ways that are largely performative or
semantic. A salient example is the intervention of Thomas Metzinger, a philosopher and former member
of the EU’s AI Ethics Guidelines drafting panel, who condemned the process as “ethicswashing” and
referred to the industry’s approach as an “ethicswashing machine” [
          <xref ref-type="bibr" rid="ref28">69</xref>
          ].
        </p>
        <p>
          This form of merely rhetorical ethics is best understood as a communicative strategy — intended
not to reform practices, but to shape perception. As Freiman argues, ethicswashing functions as a
deliberate branding technique aimed at cultivating consumer confidence through the illusion of ethical
integrity [
          <xref ref-type="bibr" rid="ref29">70</xref>
          ]. Floridi terms this phenomenon ethics bluewashing, echoing its environmental counterpart,
greenwashing, and defines it as the “malpractice of making unsubstantiated or misleading claims about,
or implementing superficial measures in favor of, the ethical values and benefits of digital processes,
products, services, or other solutions in order to appear more digitally ethical than one is” [
          <xref ref-type="bibr" rid="ref30">71</xref>
          ].
        </p>
        <p>
          Further challenges in operationalizing the ethics of socio-technical and AI systems contribute to what
is now recognized as digital ethicswashing [
          <xref ref-type="bibr" rid="ref31">72</xref>
          ]. These include the unchecked proliferation of ethical
codes and guidelines, their opportunistic use to delay or deter legislation, and the outsourcing of ethically
dubious research to jurisdictions with weaker oversight [
          <xref ref-type="bibr" rid="ref30">71</xref>
          ]. Another form of digital ethicswashing is
known as ethics-bashing: this refers to the dismissal or trivialization of ethical discourse, reducing it to
modular tools — ethics boards, self-governance frameworks, or stakeholder assemblies — stripped of their
normative force and treated instead as bureaucratic formalities [
          <xref ref-type="bibr" rid="ref32">73</xref>
          ]. At the user level, ethicswashing
erodes trust when a dissonance emerges between corporate rhetoric and practice — the proverbial gap
between the “talk” and the “walk” [
          <xref ref-type="bibr" rid="ref33">74</xref>
          ]. Importantly, ethicswashing is not confined to the private sector.
It pervades other institutional domains, including academia, the public sector, and policymaking [
          <xref ref-type="bibr" rid="ref31">72</xref>
          ].
Given the growing interest in inclusive and participatory methods of designing AI systems [56],
participation could be used to gain a more “ethical” appeal; considering also the limits of current
participatory practices, it is plausible to claim that participation in the digital realm could be a form of
digital ethicswashing.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Assessing participation: Power, Goals, Actors, Arenas and Formats as analytical dimensions</title>
      <p>
In this section, we propose an initial theoretical framework which aims to aid the conceptual
understanding of participatory AI implications and to support practitioners in avoiding instrumental uses
of participation. We do so by drawing on theories from public policy [
        <xref ref-type="bibr" rid="ref17 ref24 ref34 ref35 ref36 ref37 ref38 ref4">4, 17, 65, 75, 76, 77, 78, 79</xref>
        ], and AI-related
domain [
        <xref ref-type="bibr" rid="ref10">10, 41, 62</xref>
        ].
      </p>
      <p>
For this contribution, we select a series of elements that prompted our reflections on perspectives
and concerns to be added to the existing landscape. In line with the aim of the present
work, and based on the works discussed throughout the paper, we propose four critical analytical
dimensions of participation which must be encompassed in order to avoid participatory washing:
power, goals, actors, and arenas. Two fundamental considerations have guided
our decision to focus on these four specific dimensions of participation as central to our framework.
First, AI development is inherently political [
        <xref ref-type="bibr" rid="ref39 ref40">80, 81</xref>
        ], making it crucial to critically examine power
relations embedded in participatory processes, including who decides, how influence is distributed,
and whose interests are ultimately served. Second, while many of the frameworks discussed in the
literature provide valuable tools for understanding participation, they often address certain aspects —
especially those related to power, actors’ agency, and the deeper goals of participation — in a limited
or overly formalized way [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These elements are either insufficiently explored or treated in ways
that do not fully acknowledge their broader societal implications. For instance, there is a tendency to
treat participation as a procedural matter, rather than recognizing the socio-political dynamics at play,
such as the risk of reinforcing existing inequalities or enabling washing practices by dominant actors,
including tech companies, rather than working for the social good [
        <xref ref-type="bibr" rid="ref40 ref41">81, 82</xref>
        ].
      </p>
      <p>For this reason, we propose these four dimensions which are still under-theorized in relation to
their political substance and practical consequences, especially within the AI domain, as represented in
Figure 1. Additionally, Table 1 provides a concise synthesis of these dimensions, offering an overview
of critical reflections and key analytical questions practitioners should consider to ensure genuine
participatory practices.</p>
      <p>
Power. A crucial aspect concerns power, which has been extensively studied in relation to participation,
e.g., [
        <xref ref-type="bibr" rid="ref24 ref4">65, 4</xref>
        ]. Delgado et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] discuss power as a scale that ranges from consult (the minimum level) to own
(the fullest form of participation), distinguishing four degrees of decision-making authority
vested in stakeholders. Participation ranges from consultative feedback without decision authority, to
ownership where stakeholders actively shape the entire AI lifecycle, a level never observed in current
practices [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. A crucial example of limited power redistribution can be found in a project involving
predictive AI for child welfare risk assessment [
        <xref ref-type="bibr" rid="ref42">83</xref>
        ]. In this case, community members such as caregivers
and child protection workers were invited to share their perspectives, but were ultimately excluded
from any decision-making about the design or deployment of the AI model. The assumption that AI
would be implemented was never questioned, and participation remained restricted to consultation on
pre-defined aspects. This underscores how stakeholders, although present, may remain marginal to
critical decisions when the power structure is not fundamentally rebalanced [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Beyond the question of how many levels of wielded power and authority one should distinguish,
in our perspective this power dimension must be understood as a continuum and as a relational issue,
reflected also in other dimensions: from
the selection of stakeholders [
        <xref ref-type="bibr" rid="ref43">84</xref>
        ], to the roles of interaction, recruitment techniques, and possibilities for
social learning, for example redefining problems rather than merely devising solutions [
        <xref ref-type="bibr" rid="ref34 ref4">4, 75</xref>
        ].
      </p>
      <p>
        Goals. Participation always has a goal; there is always a why to be answered. It is possible to
conceptualize this by distinguishing the level of authority exercised by the participants, ranging from
the improvement of user experience, to alignment with values or preferences of people engaged, to
deliberation on specific features, and finally to engagement across the entire life cycle [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The study in
[
        <xref ref-type="bibr" rid="ref34">75</xref>
        ] allows the conceptualization of two further elements to define the aim of a participatory process.
Drawing on the classical study in [
        <xref ref-type="bibr" rid="ref35">76</xref>
        ], they argue that determining the appropriate level of participation
depends also on how structured the problem at hand is — whether it is fully structured, moderately
structured, or unstructured. Clearly, a more open problem to be solved could lead to social learning and
much more power redistribution among stakeholders, compared to, e.g., a simple request to express
a preference between two options. Furthermore, the outcomes of the participatory process contribute
to defining its purpose [
        <xref ref-type="bibr" rid="ref34">75</xref>
        ]. Drawing on the study by [
        <xref ref-type="bibr" rid="ref37">78</xref>
        ], it becomes clear that analyzing both the
immediate results and the longer-term substantive outputs and outcomes of the participatory process
is essential to understanding how participation is intended to be enacted. A relevant example is the
WeBuildAI project [
        <xref ref-type="bibr" rid="ref44">85</xref>
        ], in which researchers engaged Pittsburgh residents in the co-construction of
preferences to be used in a public resource allocation algorithm. While the project was supposed to go
beyond an instrumental goal of performance optimization by encouraging reflection on civic values and
collective priorities, the scope of citizen influence remained limited. Participants were constrained to
selecting among predefined options, without the opportunity to shape the design of the algorithm itself.
As Delgado et al. observe, even advanced participatory frameworks like this often remain anchored
in preference elicitation rather than effective co-decision-making, even if it can be recognized as a
valuable starting point in the involvement of impacted stakeholders to express preferences towards the
goal [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Participation, as pointed out and recognized by many, is also an “end in itself” [
        <xref ref-type="bibr" rid="ref10 ref24 ref36">77, 10, 65</xref>
        ],
and technology could even allow new forms of participation [46].
      </p>
      <p>
        Actors. As the work by Birhane et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] points out, inclusion does not necessarily imply participation,
as individuals may be included in a process but lack the agency to make meaningful contributions.
In the study in [
        <xref ref-type="bibr" rid="ref34">75</xref>
        ] the authors further divide this concept into three aspects, namely subjects, roles
and recruitment. Firstly, subjects are classified into civil society, government, and business sector,
with varying degrees of institutionalization. Civil society actors often represent marginalized or
disadvantaged communities, while government and business actors are typically more institutionalized
and have greater influence over the process - often representing a reflection of existing power imbalances.
This is why the selection phase is a critical component of the participatory process: it is not just
about who is included, but also about how those actors are empowered within the process (roles). Regarding
recruitment, Kallina and Singh [62] discuss how stakeholder recruitment strategies must be
inclusive, ensuring that impacted groups, especially those who are harder to reach, are not excluded.
The role of knowledge is also crucial, as the level of expertise and experience directly influences
the extent to which stakeholders can shape the outcome of AI projects. The work in [
        <xref ref-type="bibr" rid="ref45">86</xref>
        ] further
clarifies that stakeholder selection often occurs based on the knowledge and experience stakeholders possess
— that is, whether they have more or less specialized cognitive resources in the domain of interest.
A case that exemplifies the risks of tokenistic recruitment involves the participatory process in the
educational research context of human-robot interaction patterns for children, as highlighted by
Delgado et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In this project, only educators were consulted, with the assumption that they could
effectively represent children’s needs through role-playing. This kind of proxy-based participation
raises concerns about extractivism [63], where the knowledge and lived experience of the target group
are interpreted and filtered by institutional actors, potentially misrepresenting their interests [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. A
way to prevent misrepresentation is to promote digital literacy [
        <xref ref-type="bibr" rid="ref46">87</xref>
        ], with attention also to the rhythm
of the participatory practices.
      </p>
      <p>
        Arenas and formats. Delgado et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] distinguish participatory approaches based on the degree
of involvement: some processes allow stakeholders to provide input and feedback, others facilitate
group discussions, while still others go even further, involving stakeholders in decision-making or
even allowing them to reflexively choose how to participate. Participation formats further shape
these dynamics by defining the methods and modes of interaction. As Fung [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and Gaventa [
        <xref ref-type="bibr" rid="ref38">79</xref>
        ]
explain, formats include traditional methods such as public hearings, deliberative forums, participatory
budgeting and online engagement. Accordingly, the work in [
        <xref ref-type="bibr" rid="ref34">75</xref>
        ] distinguishes between invited, created,
or closed spaces, based on their level of institutionality and the degree of access provided to stakeholders
within the arena. These types of arenas shape how collaborative and participative spaces are both
socially and physically constructed. The dynamics of participation within these spaces, that is, the
interpersonal interactions and, especially, the social performance of participants, are likewise elements
worth considering. In this regard, the roles of mediators within participatory processes, an element
which is often underdeveloped in current frameworks, could be further investigated. As highlighted by
Delgado et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], this level often involves professionals with expertise in user experience (UX) and
human-computer interaction (HCI), who act as mediators between the stakeholders and designers. Their
role is to ensure that input is translated into practical outcomes, bridging the gap between technical
knowledge and stakeholder needs — which is more a mediation towards the aim. However, we think
that another level of mediation should be addressed and introduced, namely a specific mediation for
the process, to ensure that all participants’ perspectives are equally valued, especially in complex
participatory processes like AI design. In our view, this level of mediation requires professionals with
expertise in sociology, psychology, anthropology, conflict resolution, and facilitation, who are trained to
manage the complex social dynamics that arise in participatory settings. An example of a more inclusive
and effective participatory arena is found in a project co-designing social robots with a small community
of adults with depression [
        <xref ref-type="bibr" rid="ref47">88</xref>
        ]. The process was structured across multiple phases: initial interviews
to gather narratives, collaborative workshops for idea generation, and final sessions for validation.
While ultimate control remained with the designers, the iterative nature and openness of the format
allowed stakeholders to deeply engage and refine their input over time. This example demonstrates how
well-designed formats can promote social learning and build proper legitimacy. Accordingly, another
crucial aspect defining arenas of interaction is time. As the study in [
        <xref ref-type="bibr" rid="ref34">75</xref>
        ] recognizes, participation
practices have rhythms, whether one-off events, longer-term engagements, or ongoing processes,
which influence the depth and quality of involvement. In AI projects, this issue is particularly critical, as
short-term consultations may not provide stakeholders with enough time to understand the complexity
of AI systems and influence their design. In our view, the long-term involvement of stakeholders not
only deepens their understanding of the technology, but also enables them to observe and react to its
integration into real-world contexts.
      </p>
      <p>Table 1 aims to provide practitioners and researchers with concrete analytical criteria to critically
assess and structure participatory AI processes, explicitly addressing common vulnerabilities linked to
participatory washing. For instance, the Power dimension urges clarification about stakeholders’ actual
influence on decision-making across the AI lifecycle, emphasizing transparency and genuine power
redistribution. The Goals dimension encourages explicitly defining participation objectives, cautioning
against vaguely stated purposes and superficial engagement. It underscores how different problem
structures, ranging from highly structured to open-ended issues, directly influence the nature of
participation required. Concerning Actors, the table highlights the importance of thoughtful recruitment and
representation, addressing the pitfalls of tokenism and proxy participation, thus ensuring participants’
genuine empowerment and contribution. Finally, the Arenas and Formats dimension stresses the
necessity of carefully designed participatory environments, recognizing how spatial, temporal, and
methodological configurations profoundly impact the legitimacy, depth, and effectiveness of stakeholder
engagement.</p>
      <p>Overall, the table operationalizes theoretical insights into practical guidance, promoting deeper,
reflective, and genuinely democratic participatory practices in AI development. It emphasizes that
participatory quality hinges not only on who is included, but on how power is distributed, what goals
are set, how actors are recruited and supported, and in which settings and formats deliberation unfolds.
Participation in AI should be an ongoing process of acceptance and feedback that cannot be captured
in short-term consultations, as stakeholders’ perspectives on AI may shift as they witness its effects,
both positive and negative, over time. As acknowledged in the broader discourse, a central obstacle to
achieving genuine participation is the knowledge gap. This is why a fundamental requirement for
empowerment is AI literacy [
        <xref ref-type="bibr" rid="ref46">87</xref>
        ], which facilitates informed decision-making and ensures the integration of diverse perspectives
throughout the entire design process.
      </p>
      <table-wrap id="tab1">
        <label>Table 1</label>
        <caption>
          <p>Critical reflections and key analytical questions for the four dimensions of participatory AI.</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Dimension</th>
              <th>Critical Reflections</th>
              <th>Key Analytical Questions</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>Power</td>
              <td>Clarify how and where participants influence decisions across the AI lifecycle (design, deployment, feedback); risk of limited stakeholder agency; need for genuine redistribution of power and transparency about intent</td>
              <td>Who holds decision-making authority? How is power distributed and exercised? What power dynamics are challenged through the participatory process? To what extent can stakeholders shape outcomes across the lifecycle of AI systems?</td>
            </tr>
            <tr>
              <td>Goals</td>
              <td>Clearly articulate the objectives of participation and link them to the type of problem addressed (structured vs. open-ended); avoid vague or general goals (e.g., “improving inclusivity”) without structure; acknowledge that participation is not inherently positive and must be critically assessed</td>
              <td>What are the explicit and implicit goals of participation? Are participants co-defining objectives or selecting predefined options? How does the structure of the problem influence the type of participation required?</td>
            </tr>
            <tr>
              <td>Actors</td>
              <td>Clarify who is involved and the rationale behind recruitment (representation, diversity); avoid assuming the views of a few reflect all stakeholders; ensure participants have the capacity and support to engage (AI literacy, social performance); include social mediators (not just technical ones) to foster equity and manage interactional dynamics</td>
              <td>Who is included/excluded? What roles do actors assume? How are participants recruited? Are mediators included to balance participation and guide process dynamics?</td>
            </tr>
            <tr>
              <td>Arenas and Formats</td>
              <td>Describe arenas in practical terms (physical/digital, open/invited); consider participation rhythm: one-off vs. long-term engagement; ensure clarity on what participants can expect to do and influence; recognize that effectiveness depends on continuous engagement</td>
              <td>In what spaces does participation occur (invited, created, closed)? What interaction formats are employed? Are arenas anchored in realistic and understandable terms for participants? How do time and rhythms shape participation depth and legitimacy?</td>
            </tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and future work</title>
      <p>Participatory approaches in AI offer a double promise: democratizing both technology development
and decision-making. However, without clear frameworks and genuine power-sharing, such practices
risk becoming symbolic — falling into a specific form of digital ethicswashing: participatory
washing. To avoid this, drawing on public policy studies of participation and the existing literature on
participatory processes in AI development, we proposed to conceptually articulate participation along
four key dimensions (power, goals, actors, and arenas), highlighting salient aspects of each dimension
that help practitioners and developers avoid ethicswashing. This structured framework allows a more
rigorous and politically-aware assessment of participatory practices, explicitly targeting vulnerabilities
associated with participatory washing. It serves both as an analytical tool for researchers and a practical
guide for practitioners.</p>
      <p>Future work requires a deeper and more exhaustive analysis of the current, evolving landscape of
participatory AI processes based on case studies. Further research could empirically test the proposed
analytical framework across AI projects, explore the role of mediators in participatory processes, and
investigate long-term stakeholder engagement combining interviews with diverse stakeholders with a
more robust analysis of existing literature in the field. In this regard, future work would also benefit
from the development of more actionable and practice-oriented tools for participatory design which
could support practitioners in structuring and operationalizing participation in concrete AI development
workflows.</p>
      <p>[22] … Accountability, Transparency, Inclusiveness and Openness in EU Governance, Public Administration
97 (2019) 727–740. doi:10.1111/padm.12615.
[23] M. Kaldor, The idea of global civil society, International Affairs 79 (2003) 583–593.
[24] J. A. Scholte, Civil society and democracy in global governance, Global Governance 8 (2002)
281–304.
[25] European Commission, European Governance: A White Paper, 2001. URL: https://eur-lex.europa.</p>
      <p>eu/legal-content/EN/TXT/?uri=LEGISSUM%3Al10109, accessed: 2025-04-11.
[26] P. Savoldi, Giochi di partecipazione. Forme territoriali di azione collettiva, in: Giochi di
partecipazione, Franco Angeli, 2006, pp. 5–176.
[27] E. D’Albergo, G. Moini, Pratiche partecipative e politiche pubbliche: studi di caso a Roma, Rivista
delle politiche sociali (2006) 365–385.
[28] V. Laino, Co-progettazione e processi partecipativi in Regione Toscana: ricognizione e
toolkit per l’analisi e la valutazione, Master’s thesis, Sapienza Università di Roma, Facoltà di
Scienze Politiche, Sociologia, Comunicazione, 2023. URL: https://giovanisi.it/app/uploads/2023/07/
LAINO-VITTORIA_TESI-1.pdf.
[29] J. Greenwood, Organized Civil Society and Democratic Legitimacy in the European Union, British</p>
      <p>Journal of Political Science 37 (2007) 333–357.
[30] H. C. Jenkins-Smith, Analytical Debates and Policy Learning: Analysis and Change in the Federal</p>
      <p>Bureaucracy, Policy Sciences 21 (1988) 169–211.
[31] P. A. Sabatier, An Advocacy Coalition Framework of Policy Change and the Role of Policy-Oriented</p>
      <p>Learning Therein, Policy Sciences 21 (1988) 129–168.
[32] E. Wollenberg, J. Anderson, D. Edmunds, Pluralism and the Less Powerful: Accommodating
Multiple Interests in Local Forest Management, International Journal of Agricultural Resources,
Governance and Ecology 1 (2001) 199–222. URL: https://doi.org/10.1504/IJARGE.2001.000013.
doi:10.1504/IJARGE.2001.000013.
[33] N. McKeon, Are Equity and Sustainability a Likely Outcome When Foxes and Chickens Share the
Same Coop? Critiquing the Concept of Multistakeholder Governance of Food Security,
Globalizations 14 (2017) 379–398.
[34] M. Fougère, N. Solitander, Dissent in Consensusland: An Agonistic Problematization of
Multi-Stakeholder Governance, Journal of Business Ethics 164 (2020) 683–699.</p>
      <p>
[35] J. Roloff, Learning from Multi-Stakeholder Networks: Issue-Focussed Stakeholder Management,</p>
      <p>Journal of Business Ethics 82 (2008) 233–250.
[36] D. Edmunds, E. Wollenberg, A Strategic Approach to Multistakeholder Negotiations, Development
and Change 32 (2001) 231–253.
[37] L. Pellizzoni, Bridging Promises and (Dis)illusions: Deliberative Democracy in an Evolutionary
Perspective, in: R. Beunen, K. Van Assche, M. Duineveld (Eds.), Evolutionary Governance Theory,
Springer International Publishing, Dordrecht, 2015, pp. 215–232.
[38] M. Johnson, J. M. Bradshaw, P. J. Feltovich, C. M. Jonker, M. B. van Riemsdijk, M. Sierhuis, Coactive
design: designing support for interdependence in joint activity, J. Hum.-Robot Interact. 3 (2014)
43–69. URL: https://doi.org/10.5898/JHRI.3.1.Johnson. doi:10.5898/JHRI.3.1.Johnson.
[39] E. Ostrom, Crossing the Great Divide: Coproduction, Synergy and Development, World
Development 24 (1996) 1073–1087. URL: https://www.sciencedirect.com/science/article/abs/pii/
0305750X9600023X.
[40] L. Bherer, P. Dufour, F. Montambeault, The participatory democracy turn: an introduction, Journal
of Civil Society 12 (2016) 225–230. doi:10.1080/17448689.2016.1216383.
[41] E. Corbett, R. Denton, S. Erete, Power and Public Participation in AI, in: Equity and Access in
Algorithms, Mechanisms, and Optimization (EAAMO ’23), ACM, 2023. doi:10.1145/3617694.
3623228.
[42] M. Steinbach, J. Sieweke, S. Süß, The Diffusion of E-Participation in Public Administrations: A
Systematic Literature Review, Journal of Organizational Computing and Electronic Commerce 29
(2019) 61–95. URL: https://doi.org/10.1080/10919392.2019.1606849. doi:10.1080/10919392.2019.
1606849.
[43] United Nations, United Nations E-Government Survey 2020: Digital Government in the Decade
of Action and Sustainable Development, 2020. URL: https://publicadministration.un.org/egovkb/
en-us/Reports/UN-E-Government-Survey-2020, accessed: 2025-04-10.
[44] M. Deseriis, Rethinking the digital democratic affordance and its impact on political
representation: Toward a new framework, New Media &amp; Society 23 (2020) 2452–2473. doi:10.1177/
1461444820929678, original work published 2021.
[45] M. Deseriis, Reducing the Burden of Decision in Digital Democracy Applications: A Comparative
Analysis of Six Decision-making Software, Science, Technology, &amp; Human Values 48 (2021)
401–427. doi:10.1177/01622439211054081, original work published 2023.
[46] H. Landemore, Fostering More Inclusive Democracy with AI, Finance &amp; Development Magazine,
International Monetary Fund, 2023. URL: https://www.imf.org/en/Publications/fandd/issues/2023/
12/Fostering-more-inclusive-democracy-with-AI-Landemore, accessed: 2025-04-11.
[47] B. Schneier, et al., How Artificial Intelligence can Aid Democracy, Slate, 2023. URL: https://slate.</p>
      <p>com/technology/2023/04/ai-public-option.html, april 21, 2023.
[48] S. F. Auliya, O. Kudina, A. Y. Ding, et al., AI versus AI for Democracy: Exploring the Potential of
Adversarial Machine Learning to Enhance Privacy and Deliberative Decision-Making in Elections,
AI Ethics (2024). doi:10.1007/s43681-024-00588-2, online first.
[49] M. Coeckelbergh, H. S. Saetra, Climate Change and the Political Pathways of AI: The
Technocracy-Democracy Dilemma in Light of Artificial Intelligence and Human Agency, Technology in Society
75 (2023) 102406. doi:10.1016/j.techsoc.2023.102406.
[50] A. Jungherr, Artificial Intelligence and Democracy: A Conceptual Framework, Social Media +</p>
      <p>Society (2023). doi:10.1177/20563051231186353, first published online July 16, 2023.
[51] M. Coeckelbergh, Democracy as Communication: Towards a Normative Framework for Evaluating</p>
      <p>Digital Technologies, Contemporary Pragmatism 21 (2024) 217–235.
[52] M. Coeckelbergh, LLMs, Truth, and Democracy: An Overview of Risks, Science and Engineering</p>
      <p>Ethics 31 (2025). doi:10.1007/s11948-025-00529-0.
[53] S. Hossain, S. I. Ahmed, Towards a New Participatory Approach for Designing Artificial Intelligence
and Data-Driven Technologies, 2021. URL: https://arxiv.org/abs/2104.04072. arXiv:2104.04072.
[54] D. Schuler, A. Namioka (Eds.), Participatory Design: Principles and Practices, 1st ed., CRC Press,
1993. doi:10.1201/9780203744338.
[55] T. Khan, T. Park, AI Needs Inclusive Stakeholder Engagement Now More Than Ever, https:
//partnershiponai.org/ai-needs-inclusive-stakeholder-engagement-now-more-than-ever/, 2024.</p>
      <p>Accessed: 2025-04-11.
[56] M. De Sanctis, P. Inverardi, P. Pelliccione, Do Modern Systems Require New Quality Dimensions?,
in: A. Bertolino, J. Pascoal Faria, P. Lago, L. Semini (Eds.), Quality of Information and
Communications Technology. QUATIC 2024, volume 2178 of Communications in Computer and Information
Science, Springer, Cham, 2024. doi:10.1007/978-3-031-70245-7_6.
[57] S. G. Verhulst, Computational Social Science for the Public Good: Towards a taxonomy of
governance and policy challenges, ????
[58] T. Davies, A. Colom, L. Velkova, M. Poblet, L. Nobbs, L. Kuneva,
Participatory AI: Forging Shared Frameworks for Action, https://www.techpolicy.press/
participatory-ai-forging-shared-frameworks-for-action/, 2025. Tech Policy Press.
[59] J. Simonsen, T. Robertson, Routledge International Handbook of Participatory Design, Routledge,
2012.
[60] Ada Lovelace Institute, Participatory Data Stewardship, 2021. URL: https://www.</p>
      <p>adalovelaceinstitute.org/report/participatory-data-stewardship/.
[61] A. Berditchevskaia, K. Peach, E. Malliaraki, Participatory AI for Humanitarian Innovation: A</p>
      <p>Briefing Paper, Technical Report, Nesta, London, 2021.
[62] E. Kallina, J. Singh, Stakeholder Involvement for Responsible AI Development: A Process
Framework, in: Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’24), ACM,
2024. doi:10.1145/3689904.3694698.
[63] M. Sloane, Controversies, contradiction, and “participation” in AI, Big Data &amp; Society 11 (2024)</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have employed ChatGPT-4o for sentence polishing.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Birhane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Isaac</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Prabhakaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Diaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Elish</surname>
          </string-name>
          , I. Gabriel, S. Mohamed,
          <article-title>Power to the People? Opportunities and Challenges for Participatory AI</article-title>
          ,
          <source>in: Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms</source>
          , Mechanisms, and Optimization, EAAMO '22, Association for Computing Machinery, New York, NY, USA,
          <year>2022</year>
          . URL: https://doi.org/10.1145/3551624.3555290. doi:10.1145/3551624.3555290.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sloane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Moss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Awomolo</surname>
          </string-name>
          , L. Forlano,
          <article-title>Participation is not a Design Fix for Machine Learning</article-title>
          ,
          <source>in: Proceedings of the 37th International Conference on Machine Learning (ICML)</source>
          , volume
          <volume>119</volume>
          <source>of Proceedings of Machine Learning Research</source>
          , PMLR, Vienna, Austria,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>European</given-names>
            <surname>Commission.</surname>
          </string-name>
          Directorate-General for Research and Innovation,
          <source>Taking European Knowledge Society Seriously</source>
          ,
          <year>2007</year>
          . URL: https://op.europa.eu/en/publication-detail/-/publication/5d0e77c7-2948-4ef5-aec7-bd18efe3c442/language-en, retrieved April 11,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fung</surname>
          </string-name>
          ,
          <article-title>Varieties of participation in complex governance</article-title>
          ,
          <source>Public Administration Review</source>
          <volume>66</volume>
          (
          <year>2006</year>
          )
          <fpage>66</fpage>
          -
          <lpage>75</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Raniolo</surname>
          </string-name>
          ,
          <article-title>La partecipazione politica</article-title>
          . Fare, pensare, essere, Il Mulino, Bologna,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sintomer</surname>
          </string-name>
          , J. De Maillard,
          <article-title>The Limits to Local Participation and Deliberation in the French «politique de la ville»</article-title>
          ,
          <source>European Journal of Political Research</source>
          <volume>46</volume>
          (
          <year>2007</year>
          )
          <fpage>503</fpage>
          -
          <lpage>529</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>G.</given-names>
            <surname>Gourgues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Topçu</surname>
          </string-name>
          ,
          <source>Gouvernementalité et Participation. Lectures Critiques, Participations</source>
          <volume>2</volume>
          (
          <year>2013</year>
          )
          <fpage>7</fpage>
          -
          <lpage>33</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Moini</surname>
          </string-name>
          ,
          <article-title>Teoria critica della partecipazione. Un approccio sociologico</article-title>
          ,
          <source>Franco Angeli, Milano</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Moini</surname>
          </string-name>
          , Participation, Neoliberalism and Depoliticisation of Public Action, SocietàMutamentoPolitica
          <volume>8</volume>
          (
          <year>2017</year>
          )
          <fpage>129</fpage>
          -
          <lpage>145</lpage>
          . URL: https://doi.org/10.13128/SMP-20853. doi:10.13128/SMP-20853.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>F.</given-names>
            <surname>Delgado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Madaio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice</article-title>
          ,
in: <source>Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO '23)</source>, Association for Computing Machinery, New York, NY, USA, <year>2023</year>. URL: https://doi.org/10.1145/3617694.3623261. doi:10.1145/3617694.3623261.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Stimson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Raper</surname>
          </string-name>
          ,
<article-title>Participatory AI: A Method for Integrating Inclusive and Ethical Design Considerations into Autonomous System Development</article-title>, in: M. Huda, M. Wang, T. Kalganova (Eds.), <source>Towards Autonomous Robotic Systems. TAROS 2024</source>, volume <volume>15051</volume> of Lecture Notes in Computer Science, Springer, Cham, <year>2025</year>. doi:10.1007/978-3-031-72059-8_13.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Coeckelbergh</surname>
          </string-name>
          ,
          <source>AI Ethics</source>
          , The MIT Press Essential Knowledge Series, MIT Press, Cambridge, Massachusetts,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
<string-name><given-names>F.</given-names> <surname>De Nardis</surname></string-name>, <source>Sociologia politica. Per comprendere i fenomeni politici contemporanei</source>, nuova edizione ed., McGraw-Hill Education, Milano, <year>2023</year>.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>F.</given-names>
            <surname>Raniolo</surname>
          </string-name>
, <source>La partecipazione politica</source>, Il Mulino, Bologna,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
<string-name><given-names>D.</given-names> <surname>Della Porta</surname></string-name>
          ,
          <article-title>La partecipazione nelle istituzioni: concettualizzare gli esperimenti di democrazia deliberativa</article-title>
          ,
          <source>Partecipazione e Conflitto</source>
          (
          <year>2008</year>
          )
          <fpage>15</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>R. A. W.</given-names>
            <surname>Rhodes</surname>
          </string-name>
, <source>Understanding Governance: Policy Networks, Reflexivity and Accountability</source>, Open University Press, Buckingham,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. O.</given-names>
            <surname>Wright</surname>
          </string-name>
, <source>Deepening Democracy</source>,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>G.</given-names>
            <surname>Moini</surname>
          </string-name>
, <string-name><given-names>T.</given-names> <surname>Nupieri</surname></string-name>, <article-title>Forme emergenti e “nuove” pratiche della partecipazione</article-title>, in: <source>Sociologia della politica contemporanea</source>, Carocci,
          <year>2024</year>
          , pp.
          <fpage>269</fpage>
          -
          <lpage>279</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>R.</given-names>
            <surname>Segatori</surname>
          </string-name>
          ,
<source>Sociologia dei fenomeni politici</source>, Laterza, Bari-Roma,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>F. W.</given-names>
            <surname>Scharpf</surname>
          </string-name>
, <source>Governing in Europe: Effective and Democratic?</source>, Oxford University Press, Oxford,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>V. A.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
<article-title>Democracy and Legitimacy in the European Union Revisited: Input, Output and 'Throughput'</article-title>, <source>Political Studies</source>
          <volume>61</volume>
          (
          <year>2013</year>
          )
          <fpage>2</fpage>
          -
          <lpage>22</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>V.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wood</surname>
          </string-name>
, <article-title>Conceptualizing Throughput Legitimacy: Procedural Mechanisms of Accountability</article-title>, <source>Big Data &amp; Society</source> (<year>2024</year>) 20539517241235862. doi:10.1177/20539517241235862.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [64]
          <string-name>
            <given-names>A.</given-names>
            <surname>Parthasarathy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Phalnikar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jauhar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Somayajula</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Krishnan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ravindran</surname>
          </string-name>
, <article-title>Participatory Approaches in AI Development and Governance: A Principled Approach</article-title>, <year>2024</year>. URL: https://arxiv.org/abs/2407.13100. arXiv:2407.13100.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [65]
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Arnstein</surname>
          </string-name>
          ,
          <article-title>A Ladder of Citizen Participation</article-title>
          ,
          <source>Journal of the American Institute of Planners</source>
          <volume>35</volume>
          (
          <year>1969</year>
          )
          <fpage>216</fpage>
          -
          <lpage>224</lpage>
. doi:10.1080/01944366908977225.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [66]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ananny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Crawford</surname>
          </string-name>
, <article-title>Seeing Without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability</article-title>,
          <source>New Media &amp; Society</source>
          <volume>20</volume>
          (
          <year>2018</year>
          )
          <fpage>973</fpage>
          -
          <lpage>989</lpage>
. URL: https://doi.org/10.1177/1461444816676645. doi:10.1177/1461444816676645.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [67]
          <string-name>
            <given-names>B.</given-names>
            <surname>Attard-Frost</surname>
          </string-name>
, <string-name><given-names>A.</given-names> <surname>De los Ríos</surname></string-name>,
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Walters</surname>
          </string-name>
          ,
          <article-title>The ethics of AI Business Practices: a Review of 47 AI Ethics Guidelines</article-title>
          ,
          <source>AI and Ethics</source>
          <volume>3</volume>
          (
          <year>2022</year>
          )
          <fpage>389</fpage>
          -
          <lpage>406</lpage>
. doi:10.1007/s43681-022-00156-6.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [68]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bowen</surname>
          </string-name>
, <source>After Greenwashing: Symbolic Corporate Environmentalism and Society</source>, Organizations and the Natural Environment, Cambridge University Press,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [69]
          <string-name>
            <given-names>T.</given-names>
            <surname>Metzinger</surname>
          </string-name>
, <article-title>EU Guidelines: Ethics Washing Made in Europe</article-title>, https://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe/24195496.html,
          <year>2019</year>
          . Accessed April
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [70]
          <string-name>
            <given-names>O.</given-names>
            <surname>Freiman</surname>
          </string-name>
          ,
<article-title>Making sense of the conceptual nonsense 'trustworthy AI'</article-title>
          ,
          <source>AI and Ethics</source>
          <volume>3</volume>
          (
          <year>2023</year>
          )
          <fpage>1351</fpage>
          -
          <lpage>1360</lpage>
. doi:10.1007/s43681-022-00241-w.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [71]
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
<article-title>Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical</article-title>, <source>Philosophy and Technology</source>
          <volume>32</volume>
          (
          <year>2019</year>
          )
          <fpage>185</fpage>
          -
          <lpage>194</lpage>
. doi:10.1007/s13347-019-00354-x.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [72]
          <string-name>
            <given-names>M.</given-names>
            <surname>Schultz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. G.</given-names>
            <surname>Conti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Seele</surname>
          </string-name>
, <article-title>Digital Ethicswashing: a Systematic Review and a Process-Perception-Explanation Framework</article-title>, <source>AI and Ethics</source> (<year>2024</year>) <fpage>169</fpage>-<lpage>177</lpage>. doi:10.1007/s43681-024-00430-9.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [73]
          <string-name>
            <given-names>E.</given-names>
            <surname>Bietti</surname>
          </string-name>
          ,
          <article-title>From Ethics Washing to Ethics Bashing: a View on Tech Ethics from within Moral Philosophy</article-title>
          ,
          <source>in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2020</year>
. doi:10.1145/3351095.3372860.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [74]
          <string-name>
            <given-names>E.</given-names>
            <surname>Keymolen</surname>
          </string-name>
          ,
<article-title>Trustworthy tech companies: talking the talk or walking the walk?</article-title>
          ,
          <source>AI and Ethics</source>
          <volume>4</volume>
          (
          <year>2024</year>
          )
          <fpage>169</fpage>
          -
          <lpage>177</lpage>
. doi:10.1007/s43681-022-00254-5.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [75]
          <string-name>
            <given-names>K.</given-names>
            <surname>Hofer</surname>
          </string-name>
, <string-name><given-names>D.</given-names> <surname>Kaufmann</surname></string-name>, <article-title>Actors, arenas and aims: A conceptual framework for public participation</article-title>,
          <source>Planning Theory</source>
          <volume>22</volume>
          (
          <year>2022</year>
          )
          <fpage>357</fpage>
          -
          <lpage>379</lpage>
          . URL: https://api.semanticscholar.org/CorpusID:253533431.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [76]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hisschemöller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hoppe</surname>
          </string-name>
          ,
          <article-title>Coping with Intractable Controversies: The Case for Problem Structuring in Policy Design and Analysis</article-title>
          ,
          <source>Knowledge and Policy</source>
          <volume>8</volume>
          (
          <year>1995</year>
          )
          <fpage>40</fpage>
          -
          <lpage>60</lpage>
. URL: https://doi.org/10.1007/BF02832229. doi:10.1007/BF02832229.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [77]
          <string-name>
            <given-names>N.</given-names>
            <surname>Nelson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wright</surname>
          </string-name>
, <article-title>Participation and Power</article-title>, in: N. Nelson, S. Wright (Eds.),
          <source>Power and Participatory Development: Theory and Practice</source>
          , Intermediate Technology Publications, London,
          <year>1995</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [78]
          <string-name>
            <given-names>J.</given-names>
            <surname>Newig</surname>
          </string-name>
          ,
          <article-title>Does Public Participation in Environmental Decisions Lead to Improved Environmental Quality? Towards an Analytical Framework</article-title>
, <source>Communication, Cooperation, Participation (International Journal of Sustainability Communication)</source>
          <volume>1</volume>
          (
          <year>2007</year>
          )
          <fpage>51</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [79]
          <string-name>
            <given-names>J.</given-names>
            <surname>Gaventa</surname>
          </string-name>
          ,
          <article-title>Finding the Spaces for Change: A Power Analysis</article-title>
          ,
<source>IDS Bulletin</source> <volume>37</volume>
          (
          <year>2006</year>
          )
          <fpage>23</fpage>
          -
          <lpage>33</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [80]
          <string-name>
            <given-names>K.</given-names>
            <surname>Crawford</surname>
          </string-name>
, <source>Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence</source>, Yale University Press, New Haven, CT,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [81]
<string-name><given-names>C.</given-names> <surname>O'Neil</surname></string-name>, <source>Weapons of Math Destruction</source>, Penguin Books, Harlow, England,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [82]
          <string-name>
            <given-names>A.</given-names>
            <surname>Malizia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Paternò</surname>
          </string-name>
          ,
          <article-title>Why Is the Current XAI Not Meeting the Expectations?</article-title>
          ,
          <source>Commun. ACM</source>
          <volume>66</volume>
          (
          <year>2023</year>
          )
          <fpage>20</fpage>
          -
          <lpage>23</lpage>
. URL: https://doi.org/10.1145/3588313. doi:10.1145/3588313.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [83]
<string-name><given-names>H.-F.</given-names> <surname>Cheng</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Stapleton</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Bullock</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Chouldechova</surname></string-name>, <string-name><given-names>Z. S.</given-names> <surname>Wu</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Zhu</surname></string-name>
, <article-title>Soliciting Stakeholders' Fairness Notions in Child Maltreatment Predictive Systems</article-title>
          ,
          <source>in: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, ACM</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
. doi:10.1145/3411764.3445665.
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [84]
          <string-name>
            <given-names>D.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hayes</surname>
          </string-name>
, <article-title>Ethnographic and Participatory Methods in Information Systems Research: Accessing Marginalized Communities</article-title>,
          <source>International Journal of Human-Computer Interaction</source>
          <volume>30</volume>
          (
          <year>2014</year>
          )
          <fpage>393</fpage>
          -
          <lpage>408</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          [85]
<string-name><given-names>M. K.</given-names> <surname>Lee</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Kusbit</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Kahng</surname></string-name>, <string-name><given-names>J. T.</given-names> <surname>Kim</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Yuan</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Chan</surname></string-name>, <string-name><given-names>D.</given-names> <surname>See</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Noothigattu</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Lee</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Psomas</surname></string-name>, <string-name><given-names>A. D.</given-names> <surname>Procaccia</surname></string-name>, E. Y. Huang,
          <article-title>WeBuildAI: Participatory Framework for Algorithmic Governance</article-title>
          ,
          <source>Proceedings of the ACM on Human-Computer Interaction</source>
          <volume>3</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>35</lpage>
. doi:10.1145/3359283.
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          [86]
          <string-name>
            <given-names>A.</given-names>
            <surname>Fung</surname>
          </string-name>
          ,
<article-title>Putting the Public Back into Governance: The Challenges of Citizen Participation and Its Future</article-title>
          ,
          <source>Public Administration Review</source>
          <volume>75</volume>
          (
          <year>2015</year>
          )
          <fpage>513</fpage>
          -
          <lpage>522</lpage>
. URL: https://doi.org/10.1111/puar.12361. doi:10.1111/puar.12361.
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          [87]
<string-name><given-names>D. T. K.</given-names> <surname>Ng</surname></string-name>, <string-name><given-names>J. K. L.</given-names> <surname>Leung</surname></string-name>, <string-name><given-names>S. K.-W.</given-names> <surname>Chu</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Qiao</surname></string-name>
          ,
          <article-title>Conceptualizing AI literacy: An exploratory review</article-title>
          ,
          <source>Comput. Educ. Artif. Intell</source>
          .
          <volume>2</volume>
          (
          <year>2021</year>
          )
100041. URL: https://api.semanticscholar.org/CorpusID:244514711.
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          [88]
          <string-name>
            <given-names>H. R.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Šabanović</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-L.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nagata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Piatt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bennett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hakken</surname>
          </string-name>
          ,
          <article-title>Steps toward participatory design of social robots: mutual learning with older adults with depression</article-title>
          ,
          <source>in: Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI '17)</source>
          , ACM,
          <year>2017</year>
          , pp.
          <fpage>244</fpage>
          -
          <lpage>253</lpage>
. doi:10.1145/2909824.3020237.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>