<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Afordances and
Constraints Outlining the Implementation of Artificial Intelligence in Public Sector Organiza-
tions. International Journal of Information Management 73 (2023). https://doi.org/10.1016/j.ijin</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>People, Processes, and Innovation Progress: Understanding Organizational Drivers of AI Adoption in the Public Sector</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Saurav</string-name>
          <email>saurav@student.kuleuven.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stanislav Mahula</string-name>
          <email>stanislav.mahula@kuleuven.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joep Crompvoets</string-name>
          <email>joep.crompvoets@kuleuven.be</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <kwd-group>
          <kwd>Artificial Intelligence</kwd>
          <kwd>Public Sector</kwd>
          <kwd>Digital Transformation</kwd>
          <kwd>Innovation Teams</kwd>
        </kwd-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Faculty of Social Sciences (FSW), KU Leuven</institution>
          ,
          <addr-line>3000 Leuven</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Proceedings EGOV-CeDEM-ePart conference (CEUR Workshop Proceedings, ISSN 1613-0073)</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <issue>03</issue>
      <fpage>94</fpage>
      <lpage>100</lpage>
      <abstract>
        <p>This paper explores organizational factors such as culture, structure, and personal behaviors that influence the successful adoption of artificial intelligence (AI) in public sector organizations in the EU. While AI offers the potential for enhanced efficiency and service quality, its adoption remains constrained by barriers like employee resistance, rigid bureaucratic structures, and cultural inertia. Using the Technology-Organization-Environment (TOE) framework, the research identifies AI adoption challenges and enablers. The research reveals that people-centricity is the primary facilitator in adopting AI. This can be achieved by building trust, multidisciplinary teams, and sustainable implementation strategies.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Public sector organizations often operate with rigid structures and formal procedures characterized by lengthy mechanisms and hierarchical control. Contrariwise, technological advancements like AI require flexibility and often introduce disruption and uncertainty [18]. This dynamic influences implementation at the procedural, cultural, and individual levels, with the extent of impact varying based on the size and structure of the organization [8]. Despite numerous studies addressing the ethical and technical aspects of these challenges, a compelling area of exploration remains in understanding AI adoption within the organizational context of public sector entities. Analyzing how organizational factors either facilitate or inhibit such adoption across different levels can offer recommendations for effectively integrating dynamic innovations, particularly within the rigid bureaucratic systems prevalent in the public sector [18]. Therefore, this study bases its foundation on the following primary research question:
“What cultural, structural, and individual factors within public sector organizations shape the successful adoption of artificial intelligence?”</p>
      <p>To further understand the impact of an innovative transformational environment and then to construct
a theoretical lens to concretize the theoretical framework model, this article explores the most recent
and relevant niche (2020-2024) literature in this field to identify the abovementioned factors. This
paper is structured as follows: Building on findings from the literature (Section 2), the study integrates
the Technology-Organization-Environment (TOE) framework with a newly developed CSI
(Culture-Structure-Individual) model for a more granular understanding of organizations. Methodologically
(Section 3), the paper uses qualitative interviews with public sector experts to assess these factors in
practice. The findings are then analyzed thematically in the results section (Section 4), leading to a
discussion (Section 5) on the implications for sustainable AI adoption and a conclusion (Section 6) with
key insights, limitations, and future research directions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Organizational Influencers: Literature Review</title>
      <p>Understanding organizational readiness through cultural adaptation and structural barriers is crucial in
the public sector, where rigid structures and cultural inertia prevail. With this literature review, we
synthesize existing studies to identify key organizational drivers and hindrances to AI integration in
public organizations.</p>
      <sec id="sec-2-1">
        <title>2.1. Cultural Barriers and Change Management in AI Adoption</title>
        <p>An organization’s AI readiness depends on the right culture (innovation, risk-taking, and
experimentation), context (e-government initiatives and strategic tech alignment), and institutional orientation (New
Public Management). Cross-agency collaboration, transformational leadership, and institutionalized
learning also significantly impact adoption efforts [10]. For both incremental and transformational
changes, the presence of an innovative and open culture is a crucial determinant [8]. </p>
        <p>Contrariwise, cultural challenges such as a lack of innovation spirit, insufficient skills, aversion to risk-taking, and difficulties with AI procurement can become significant obstacles if not mitigated [18]. The integration of AI into organizations often leads to unintended cultural and structural consequences. For example, in a Dutch case of fraud detection, automated processes replaced human intervention, resulting in one-third of staff being laid off and a shift from traditional bureaucratic roles to system-monitoring functions. This transition limited professionals’ discretionary freedom, reducing their ability to provide feedback on the AI system’s broader implications. </p>
        <p>
          The failure to align AI adoption with strategic cultural changes adversely affected trust, adaptability, and the effective evolution of bureaucratic roles [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Consequently, risk-averse behavior, often prevalent in publicly funded organizations, poses another major barrier to AI adoption due to the novel and uncertain nature of AI projects, which associates them with perceived risks. Implementing AI, like other disruptive technologies, requires change management strategies and skills training to mitigate these risks and reconfigure organizational structures effectively [10, 16].
        </p>
        <p>
          An innovation culture that tolerates some failure and employs agile project management methods tends to support AI adoption. Despite efforts to build trust in AI systems, particularly for business-critical decisions, researchers emphasize that while AI is effective for repetitive tasks, it lacks the understanding and knowledge inherent in human interactions. Public managers consistently stress that the “human touch” remains indispensable in workplaces, particularly in roles that involve managing people. Culturally, therefore, balancing technological advancements with human-centric approaches becomes paramount [
          <xref ref-type="bibr" rid="ref2">2, 16</xref>
          ].
        </p>
        <p>
          Employee resistance to change is inevitable but relatively manageable. Resistance can be mitigated
through better communication and collaboration among employees, education, and engagement
strategies. Transparent communication is essential to address concerns about shifts in power dynamics and
to convey the potential benefits of transformation for individuals and groups within the organization
[
          <xref ref-type="bibr" rid="ref1 ref6">1, 6</xref>
          ]. Senior management and decision-makers not only encourage technological adoption and digital transformation but also drive cultural change, for instance by recruiting younger generations to balance and shift the mindset of current employees. Leadership and financial support, combined with a data-driven culture, bring a successful transition [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Investing in social ties between departments and building cross-functional expertise fosters the responsiveness and adaptability that make an organization change-ready. Change management programs that address employees’ legitimate concerns and promote continuous learning tend to evolve the cultural mindset within the institution. By integrating AI systems into daily tasks and providing platforms for data storage and sharing, organizations can ensure a smoother transition while maintaining trust and collaboration across all levels [
          <xref ref-type="bibr" rid="ref6">6, 18</xref>
          ]. This culture is materialized and reinforced through the structures and processes followed within the system; structural barriers therefore become prominent factors of influence.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Structural and Procedural Barriers to AI Adoption</title>
        <p>
          Despite a few successful AI adoptions in government, changes in organizational structures and administrative processes remain unclear. Effective AI innovation requires clear transformation to create value [15]. AI adoption is context-specific and embedded in structures, processes, and routines. Case studies show that AI’s technical complexity demands alignment with existing systems and complementary signature structures [18]. Structural barriers require changes in development, operations, support, and legal frameworks. Employee concerns, though mitigated by communication and upskilling, persist [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
Rigid structures, formal processes, and control needs pose unique public sector challenges. The tension
between these and flexibility limits adoption. Balancing experimentation and identity is essential [18].
        </p>
        <p>
          A structured environment that enables employees to collaborate with machines through clear task
divisions rather than replacing humans or forcing them to adapt to machine directives helps overcome
these barriers to an extent [11]. For instance, in the Netherlands, AI was initially treated as a technical
analytics upgrade rather than a strategic transformation. Human oversight was later mandated to
ensure ethical and accurate decision-making. This exhibits the need for an organizational approach
that integrates technical and strategic considerations [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], which also sheds light on ethical and
normative considerations. These considerations are more salient for AI innovations than for traditional IT
implementations. Successful adoption demands the integration of technical, operational, managerial,
ethical, and legal expertise into new organizational structures and processes [18]. 
        </p>
        <p>Investments in intangible organizational capital are equally critical. Beyond training in digital
skills, organizations need to redesign their processes, including hiring, performance evaluations, and
reward systems, as well as broader initiatives such as business process reengineering and organizational
redesign [17]. An organization’s culture and structure build a foundation for its people to innovate,
even though resistance to change might emerge.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Individual Barriers: Employee Resistance in AI Adoption</title>
        <p>
          Employees’ attitudes and mindsets are significant challenges in adopting AI. Felemban et al. (2024) showed that behavior and attitudes determine organizational readiness. Resistance is fueled by a lack of understanding, cognitive misperceptions, and a preference for the status quo. Skepticism about AI’s reliability and fairness affects trust [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Employee skills are critical for digitally induced change. Digital and cross-functional skills enable the integration of new technologies, especially for incremental changes [8]. In the public sector, a lack of AI expertise is a barrier; targeted training is needed to design and implement AI solutions [12]. Moreover, job security concerns are a major source of resistance. Employees fear that AI might lead to job losses, diminished managerial authority, and the downgrading of roles. This anxiety reflects a strong bias toward maintaining familiar work environments (status quo inertia)
[
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. 
        </p>
        <p>Resistance also stems from routine disruption, lack of explainability, and unclear task prioritization [16]. Perceived usefulness and task routineness influence acceptance. Adoption requires rethinking task divisions between technology and workers and between junior and senior employees [17]. Organizational resistance is influenced by inadequate communication between management and employees. In Slovakia, a lack of participation and insufficient incentives hindered adoption. Effective communication and engagement are critical for frontline employees, management, and end-users [16, 20].</p>
        <p>The behavioral implications of digitally-induced changes, like AI transformation, are complex.
Adjustments to new digital environments introduce new norms and values in the workplace where employees
take on new tasks. Public managers are critical in managing transformation and addressing workforce
changes [8]. These changes require examining automation’s long-term effects on jobs and their legislative implications [21]. Public sector organizations would need to address concerns like job losses, discrimination, and loss of trust by creating opportunities and building a new generation of AI specialists [12].</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Key Facilitators of Sustainable AI Adoption</title>
        <p>
          The long-term success of AI adoption hinges on building in-house technical competencies through
hiring and training individuals skilled in AI use and management. Case studies highlight the need
for permanent teams dedicated to collaborating with AI systems and adapting to non-deterministic
workflows [11]. Training programs and workshops via knowledge-sharing platforms can enhance
technology literacy to position AI as a tool to assist in decision-making and service delivery, as exhibited
by the UK, where local councils equipped employees with necessary skills through targeted training
programs [20]. “AI trainers” in another case study demonstrated how dedicated interaction with AI
systems requires organizational knowledge and adaptability. Cross-cutting issues such as digital literacy
and resistance to task changes were observed across multiple domains [
          <xref ref-type="bibr" rid="ref4">4, 11</xref>
          ]. Governments can catalyze
AI talent pools by using tailored approaches to address sector-specific challenges. A supportive culture
of learning and continuous skill development reduces resistance to change and enhances the chances of
organizational success [
          <xref ref-type="bibr" rid="ref2 ref6">2, 6</xref>
          ].
        </p>
        <p>AI adoption requires understanding the trade-off between efficiency and effectiveness while addressing challenges such as limited resources and aging populations. Scholars identify two key approaches: structural separation, which isolates innovative initiatives from routine operations, and contextual integration, which embeds innovation within existing frameworks. Managing the tension between exploration and exploitation is critical to adapting to technological changes effectively [18]. In Europe,
despite governments being pioneers in leveraging AI to enhance service delivery, strategies often lack
specificity in achieving ambitious goals [9]. To build AI-related capacity, many public organizations
are establishing AI or innovation labs. These labs provide experimentation spaces to test emerging AI
technologies and align them with existing practices and routines. This creates a safe environment for
experimentation where organizations address knowledge gaps and refine their understanding of AI
technologies and their broader implications [12, 13]. Creating sustainable AI innovation ecosystems
requires governments to work on their legislative and judicial capacities. Balancing reward and
punishment systems to favor innovators can help address the public sector’s lack of innovation spirit. Strategic
initiatives, such as AI labs and experimentation spaces, also support human-centric AI integration while
being aligned with organizational goals [12, 14].</p>
        <p>Organizations with intermediate AI experience require management support to allocate resources effectively and transition from exploration to implementation. For state-owned enterprises and experienced organizations, internal development of AI solutions often leads to greater complexity and the need for customer-centric approaches. However, increased intra-organizational diffusion of AI can amplify resistance, which might be reduced through change management strategies [16].</p>
      </sec>
      <sec id="sec-2-5">
        <title>2.5. The Evergreen Need for Human Oversight</title>
        <p>A critical concern is the tendency to over-rely on AI-driven decisions, assuming that they are inherently superior to human judgment. However, this is almost never the case, particularly when AI decisions are based on poor-quality or biased data. Increasing the use of AI in staff allocation may lead to feedback loops where biased data accumulates, reinforcing existing disparities and errors. Widespread AI applications aimed at tackling fraud may conflict with other fundamental public values, such as privacy and the right to a fair trial [19, 21]. Significant investments have been made in AI labs focused on automating decision-making and supporting human decision preparation [13]. In this context, the stakeholders involved realize that human oversight in AI-driven decision-making is crucial. Human review mechanisms are usually advised, especially for decisions with serious implications, such as those affecting individuals’ lives and liberties [14].</p>
        <p>In the subsequent section, the theoretical lens of this article will be introduced, along with the data
and methods employed in the study.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <sec id="sec-3-1">
        <title>3.1. Theoretical Frameworks</title>
        <p>
          The existing research in this domain employs various theoretical models to understand different
dimensions within an organization, such as the POPIT Model (People, Organization, Processes, and IT)
[12], the Public AI Canvas (PAIC) for public value creation [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], Technology - Organization - Environment
(TOE) framework [
          <xref ref-type="bibr" rid="ref6">6, 11</xref>
          ], Status Quo Bias Theory [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], while others use custom models [
          <xref ref-type="bibr" rid="ref2 ref7">2, 7, 14, 21</xref>
          ] for
their studies, and some PRISMA-based systematic literature reviews have also been conducted.
        </p>
        <p>
          To categorize the organizational factors affecting AI adoption identified in the literature, this study adopts the Technology-Organization-Environment (TOE) framework as the primary analytical lens. Compared to other existing frameworks, TOE provides a flexible, multidimensional approach to assessing AI adoption. The framework assesses technology adoption in a structured way by categorizing the interdependent factors that collectively shape the integration and institutionalization of emerging technologies within organizations into three domains: technological, referring to the attributes and maturity of the technology itself; organizational, referring to internal structures, resources, strategy, and readiness; and environmental, referring to external pressures such as regulatory frameworks and industry standards [
          <xref ref-type="bibr" rid="ref6">6, 11</xref>
          ]
        </p>
        <p>Furthermore, this study expands on the “Organizational” component of the framework by introducing three sub-levels within the organization: “Culture (C), Structure (S), and Individual (I)”. CSI builds on the TOE framework by offering a more detailed view of organizations. Instead of treating an organization as a single unit, it breaks it down into culture, structure, and individuals, adding depth and clarity and making it easier to understand how these factors interact. Factors such as innovation spirit, open culture, failure tolerance, discretionary freedom, risk-avoiding behavior, agility, and the responsiveness of the organization are part of the cultural component. Factors including defined strategy, supporting guidelines, policies, laws and regulations, current structures, processes and routines, and defined roles and tasks are part of the structural component. Knowledge, cross-functional skills, competence, status quo inertia, trust in reliability, trust in usefulness, and job security are categorized under the individual component. Together, the CSI model integrates into and becomes part of the TOE framework.</p>
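        <p>Purely as an illustration (not part of the study’s instruments), the CSI categorization described above can be sketched as a simple lookup structure; the function name and factor strings below are ours:</p>

```python
# Illustrative sketch only: the CSI sub-levels of the TOE "Organization"
# domain, expressed as a mapping from component to its factors (Section 3.1).

CSI_MODEL = {
    "culture": [
        "innovation spirit", "open culture", "failure tolerance",
        "discretionary freedom", "risk-avoiding behavior",
        "agility", "responsiveness",
    ],
    "structure": [
        "defined strategy", "supporting guidelines",
        "policies, laws and regulations",
        "current structures, processes and routines",
        "defined roles and tasks",
    ],
    "individual": [
        "knowledge", "cross-functional skills", "competence",
        "status quo inertia", "trust in reliability",
        "trust in usefulness", "job security",
    ],
}

def classify_factor(factor):
    """Return the CSI component a named factor belongs to, or None."""
    for component, factors in CSI_MODEL.items():
        if factor.lower() in factors:
            return component
    return None
```

        <p>For example, <monospace>classify_factor("job security")</monospace> would fall under the individual component, matching the categorization in the text above.</p>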
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Data Collection</title>
        <p>To understand the impact of the above-mentioned CSI factors, this exploratory research takes a
qualitative approach through the TOE framework. The research aims to identify the most impactful factors
or barriers to artificial intelligence adoption in public sector organizations. To address the research
question, this paper employs a semi-structured interview approach conducted with experts from public
and government services, consultancies, civil servants, researchers, and auditors.</p>
        <p>Purposive and snowball sampling ensured that participants met the eligibility criteria, defined as professionals with substantial experience in AI adoption, digital transformation, or governance within the public sector. Eligibility was based on direct involvement in the policy-making, implementation, consultancy, research, or auditing of AI. Experts were identified via professional networks and institutional affiliations. Ten were approached, five responded, and three participated. Participants were required to have direct or indirect experience in researching, designing, implementing, or governing AI solutions in the public sector, with a focus on the EU. As detailed in Table 1, the final group brought diverse perspectives from within, outside, and in support roles to public organizations.</p>
        <p>Data collection occurred in November-December 2024. The flexible, semi-structured format supported
in-depth qualitative insights. Ethical guidelines ensured participant confidentiality, with verbal consent
obtained and anonymity maintained through coded identifiers. Recordings and transcripts, stored via
Microsoft Teams’ secure protocols, were accessed only during the study period.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Data Analysis</title>
        <p>The data collected from the semi-structured interviews was analyzed using a thematic analysis approach. Interview transcripts were systematically reviewed to identify key phrases, patterns, and recurring concepts, with a focus on listing the contributing factors mentioned by participants. These concepts were then organized into broader themes that emerged from the data. The final themes, “organizational, cultural, and individual factors”, are presented in the Results section and form the basis of the CSI model used to categorize influences on AI adoption in public sector organizations. To ensure a structured categorization process that accurately reflects the perspectives of the interviewees, theme selection, categorization, and interpretation were conducted manually.</p>
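        <p>Although the study’s coding was done entirely by hand, the first-pass logic of such thematic tagging can be sketched programmatically. The keyword lists and function names below are hypothetical (ours) and stand in for the manual judgment the authors describe:</p>

```python
# Hypothetical illustration: a naive first-pass keyword tagger that groups
# interview excerpts under candidate themes. The actual study coded manually;
# THEME_KEYWORDS is an assumed, simplified codebook.
from collections import defaultdict

THEME_KEYWORDS = {
    "cultural": ["mindset", "resistance", "trust", "innovation"],
    "structural": ["hierarchy", "process", "legacy", "outsourcing"],
    "individual": ["skills", "job security", "training", "fear"],
}

def tag_excerpt(excerpt):
    """Return every candidate theme whose keywords appear in the excerpt."""
    text = excerpt.lower()
    return [theme for theme, kws in THEME_KEYWORDS.items()
            if any(kw in text for kw in kws)]

def group_by_theme(excerpts):
    """Organize excerpts under the themes they touch, for later manual review."""
    grouped = defaultdict(list)
    for ex in excerpts:
        for theme in tag_excerpt(ex):
            grouped[theme].append(ex)
    return dict(grouped)
```

        <p>Such automation can only surface candidate codes; assigning meaning to them, as in this study, remains an interpretive, manual step.</p>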
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Methodology Limitation</title>
        <p>The small sample size and potential self-selection bias limit generalizability, possibly excluding voices less experienced with or more resistant to AI. The qualitative approach, while offering depth, does not provide statistical representativeness, limiting external validity. Additionally, data collected in late 2024 may not fully capture ongoing developments in government AI adoption. Social desirability bias remains a concern, as participants may have portrayed efforts more positively than warranted. Nonetheless, these limitations do not diminish the study’s value, but highlight the need for ongoing research and cross-context validation.</p>
        <p>The next section discusses the analyzed data, exploring emerging themes related to the organization’s
approach to innovation (4.1), individuals (4.2), and the roles of culture and structure (4.3).</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <sec id="sec-4-1">
        <title>4.1. Organizational Outlook</title>
        <p>To understand the current state of AI adoption in government entities and their outlook, a recurring
theme emerged: outsourcing. While outsourcing knowledge, infrastructure, and projects is a common
solution, it often creates a void in the organization’s internal capabilities to independently pursue similar
projects in the future, raising concerns about over-reliance. As stated by [A] “They get it from external
sources, which works for a while, but then the external leaves. They do not translate into institutional
capabilities. And if there are no institutional capabilities, then you really do not know how to do it, how to
manage it”. This lack of internal expertise means that organizations often lack in-house expertise and
thus rely on intermediaries with technical backgrounds, “there’s usually a liaison or individual with a
technical background to whom we can translate the requirements” [B]. </p>
        <p>Outsourcing is often chosen due to insufficient internal competencies and the absence of cross-functional skills. Although considered cost-effective, it can backfire: “They often end up stuck in an impossible situation: they lack infrastructure, can’t use external infrastructure, and then what, puff, progress stops” [B]. </p>
        <p>While outsourcing can be efficient due to external expertise and resources, it also raises concerns about excessive dependence on third parties. Questions remain about whether governments should rely so heavily on third parties [B]. Furthermore, while outsourcing can be efficient because external companies have the expertise, technology, and infrastructure, “it also makes governments overly reliant on others” [C]. </p>
        <p>A lack of strategic vision in these organizations is evident. Many organizations approach AI implementation in an ad hoc manner, with no clear long-term strategy: “The first is that there’s no strategic action, no vision, no view. It’s very ad hoc. Most organizations take this approach when they want to scale AI” [A]. Others fall into “mindless experimenting” or complete inaction, both problematic. Resistance to change, outdated systems, and legacy infrastructure impede integration: “if they’re too old, such as legacy systems, then you probably wouldn’t be able to connect those two dots” [B]. Some say time may be the impeding factor, prompting administrations to frequently opt for shortcuts; the key challenge for these innovation teams is the lack of time [C]. Rather than investing in long-term structures that could better fit the project, “administrations and governments often look for shortcuts” [C].</p>
        <p>People within these institutions play a crucial role in the implementation process, making them a vital factor. However, resistance often arises due to fear and uncertainty. As per [B], the primary concern within this sector is trust, as “accountability becomes an issue” and both internal and external stakeholders require assurance that AI technologies serve not as decision-makers but as assistants to the decision-maker: “whoever that might be, which will always be a human” [B], even more so when dealing with decision-making dilemmas. </p>
        <p>Moreover, while AI-recommended decisions may sometimes feel mandated, the same risk exists when managers act in an imperative way and dictate actions without question: “It’s not the system itself that’s the problem; it’s how leadership interacts with it, and that can be far more dangerous…” [A]. </p>
        <p>To mitigate this resistance, collaboration between domain experts and technology builders is essential. Within the same group, there are multidisciplinary experts with both technical and non-technical skills who can drive successful implementation. Thus, while the challenge lies with people, the solution also stems from them. The process is people-centric, with solutions emerging organically from within rather than being imposed externally. Internal expertise matters: people inside the institutions with operational know-how and a talent for spotting opportunities and experimenting are key, even as AI adoption requires more than just technical knowledge [A]. Having such experiments in-house is thus instrumental: “Understanding technology requires more than techies. An organization needs a multidisciplinary team, people who understand both technology and organizational strategy” [A]. Similarly, when people themselves propose AI use cases rather than having solutions imposed on them, engagement and interest naturally increase: “We are not imposing any solutions on them, but they themselves are coming up with the use cases of these technologies” [B]. They become more invested because they are the ones who proposed it [B]. People lie at the core of this: “Tools and technology are merely components of a larger process aimed at meeting people’s needs, not the entirety of the process” [C].</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.3. Cultural (C) and Structural (S) Shifts</title>
        <p>The interviews highlighted that cultural resistance and bureaucratic inertia, rooted in the inherent
traits of bureaucracy, often hinder innovation and its adoption in public sector organizations. Public
sector employees frequently follow repetitive routines, making them reluctant to embrace change. Their
skepticism can be a barrier, which is why it is essential to engage them in activities that create a cultural
shift: “Now the system more or less becomes your assistant in the decision-making process rather than
dictating what to do” [C]. If people are instead required to follow the system, resistance is likely; to
avoid it, good collaboration between domain experts and builders is essential [C]. </p>
        <p>A mindset shift is necessary given that many public sector organizations are resistant to change [B].
Moreover, as respondents mentioned, most projects are undertaken for political reasons or as publicity
stunts, rather than as part of a cohesive strategy: “Governments often fall into a pseudo sense of
digitization, implementing small, isolated AI projects without integrating them” [B]. </p>
        <p>Therefore, the need for a cultural or mindset shift gains prominence, and it varies significantly with
organizational size and culture: smaller organizations find it easier to get everyone on board because of
fewer hierarchical barriers: “If the organization has very short lines or hierarchy, […] then you can have
all the people on board” [A]. Organizing a collaborative effort becomes harder as the group grows [A].
Striking the delicate balance between experimentation and implementation is likewise highlighted as a
tedious task: “It’s a fine line” [B]. Thus, this diffusion of innovation, particularly people’s acceptance of
it, “is a very difficult task” [B].</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.4. Sustainable Implementation Strategies</title>
        <p>Innovation, being highly context-specific, requires careful adaptation. Interviewees emphasized the
importance of engaging innovators and early adopters within the organization to serve as anchors and
catalysts for change. Establishing a team of enthusiastic individuals who are eager to be part of the
innovation process can be a crucial first step. Once people see tangible results, they are more likely to
support and adopt new initiatives: “Probably they’d jump on the wagon and be more on-board. But yeah,
that’s it, that’s really it” [B]. </p>
        <p>[A] believes that truly innovative, exciting applications are often unique to an organization. They
come to light when experts identify opportunities and decide to act on them. “Figure out what they need.
You’re trying to assist with a specific business process or task they perform daily” [A]. Though it might
seem straightforward, issues often arise: “Even so, most people want their jobs to be easier” [A]. </p>
        <p>A key takeaway for successfully implementing sustainable strategies is to begin with small pilot
projects as proof of concept. This approach combines experimentation with feasibility and sustainability
assessments. The focus can then be on identifying and addressing use cases aligned with the specific
organizational needs and context of the local government.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>The findings of this study provide a perspective on the systematic challenges in adopting artificial
intelligence within government entities. Broadly, it reveals the relationship between technology and
governance through the lens of human interactions. Outsourcing, while addressing immediate skill
gaps, creates dependency. Therefore, it should not be a transactional fix but a strategic move for
capacity-building through knowledge transfer agreements and upskilling programs. These provisions
should be embedded in procurement contracts.</p>
      <p>Trust emerges as both a psychological and structural barrier, influenced by accountability and
transparency frameworks. Participatory decision-making and feedback loops during implementation
can build this trust and enhance societal value. Establishing interdepartmental steering groups or
advisory boards can lend legitimacy. Bureaucratic inertia, often seen as an innovation barrier, can
be repurposed for sustainable change. Its predictability and scalability offer potential for systematic,
equitable AI adoption. Public sector leaders should be trained not just in AI, but also in change
management and digital-era leadership. People remain both the greatest barrier and the greatest solution;
translating technical potential into human-centric outcomes is key. Cultural architects in leadership roles can
reshape mindsets.</p>
      <p>Change begins with small steps. Pilot projects and iterative learning can formalize innovation hubs
within organizations. Institutional “innovation sandboxes” should be considered to test and scale new
ideas. This study shows the need for systemic thinking in AI adoption. Isolated interventions are
insufficient. AI should be integrated as part of an interdependent system of people, technology, policy,
and society. Addressing root causes and anticipating ripple effects are essential for sustainable public
sector AI initiatives. The study cautions against fragmented efforts and supports a whole-of-government
approach.</p>
      <p>
        There is room for expansion. Limited to three interviews, the study’s findings may lack
generalizability. A larger qualitative study, especially as an organizational case study within the EU, could
deepen context-specific understanding [13]. Including multiple organizations and contexts outside the
EU could enhance global relevance and allow exploration of long-term effects and cross-departmental
dynamics [17]. Future research should also explore ethical AI use, innovation legislation, and cultural
influences on adoption. Communication, trust, and transparency are critical long-term enablers that
offer substantial opportunities for further study [
        <xref ref-type="bibr" rid="ref2">2, 14, 15</xref>
        ].
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>Successful AI adoption in the public sector is shaped by cultural, structural, and individual factors.
Organizationally, a lack of in-house expertise and reliance on outsourcing hinder sustained adoption.
Structurally, outdated systems and an absent strategic vision lead to fragmented efforts. Culturally,
resistance and skepticism toward AI decision-making highlight the need for participatory models and
trust-building mechanisms. Multidisciplinary teams and engaged leaders are central to overcoming these
barriers. A people-centric approach bridges technical potential with practical outcomes. Small-scale
pilots, iterative learning, and institutionalized innovation hubs support enduring adoption.</p>
      <p>The study reaffirms that AI adoption is not just a technical challenge but an organizational one.
These challenges, however, are surmountable. People-centric strategies, leadership, and trust-building
will determine whether adoption succeeds or stagnates. Outsourcing, while helpful short term, does
not build long-term capabilities. The focus needs to shift to internal capacity-building, cross-functional
collaboration, and structured change management. AI in government should move beyond
experimentation toward sustainable integration. A culture of innovation, institutional learning, and participatory
governance is crucial. AI’s effectiveness depends on its alignment with governance structures and public
service values. As digital transformation accelerates, the need for strategic vision, leadership support,
and workforce adaptability grows. Future research should explore long-term impacts, intergovernmental
comparisons, and the role of human oversight in AI decision-making.</p>
      <p>Successful AI adoption demands more than readiness. It requires innovation aligned with human
and procedural systems. Public leaders can build trust and transparency while investing in skilled,
cross-functional teams and small-scale pilots to build momentum. Outsourcing should serve long-term
capacity development. Ultimately, sustainable technological progress relies on people-centricity. A
resilient public sector ecosystem is built through iterative learning, mindful experimentation, and
participatory decision-making.</p>
      <p>Disclosure of Interests. The authors have no competing interests to declare that are relevant to the
content of this article.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Almatrodi</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alojail</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Organizational Resistance to Automation Success: How Status Quo Bias Influences Organizational Resistance to an Automated Workflow System in a Public Organization</article-title>
          .
          <source>Systems</source>
          <volume>11</volume>
          (
          <issue>4</issue>
          ) (
          <year>2023</year>
          ). https://doi.org/10.3390/systems11040191
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bartolotta</surname>
            ,
            <given-names>S. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gritt</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Artificial Intelligence: Threat or “Colleague”? Exploring Managers’ Perceptions of AI in Organisations</article-title>
          .
          <source>In: UKAIS 2021</source>
          (
          <year>2021</year>
          ). https://aisel.aisnet.org/ukais2021
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bosch</surname>
            ,
            <given-names>M. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Müller</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Artificial Intelligence for Interoperability in the European Public Sector: An Exploratory Study</article-title>
          . (
          <year>2023</year>
          ). https://doi.org/10.2760/633646
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Butt</surname>
            ,
            <given-names>J. S.</given-names>
          </string-name>
          :
          <article-title>A Comparative Study About the Use of Artificial Intelligence (AI) in Public Administration of Nordic States with Other European Economic Sectors</article-title>
          . (
          <year>2024</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Fatima</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Desouza</surname>
            ,
            <given-names>K. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buck</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fielt</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Public AI Canvas for AI-enabled Public Value: A Design Science Approach</article-title>
          .
          <source>Government Information Quarterly</source>
          <volume>39</volume>
          (
          <issue>4</issue>
          ) (
          <year>2022</year>
          ). https://doi.org/10.1016/j.giq.2022.101722
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Felemban</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sohail</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruikar</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Exploring the Readiness of Organisations to Adopt Artificial Intelligence</article-title>
          .
          <source>Buildings</source>
          <volume>14</volume>
          (
          <issue>8</issue>
          ) (
          <year>2024</year>
          ). https://doi.org/10.3390/buildings14082460
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Giest</surname>
            ,
            <given-names>S. N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klievink</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>More than a Digital System: How AI is Changing the Role of Bureaucrats in Different Organizational Contexts</article-title>
          .
          <source>Public Management Review</source>
          <volume>26</volume>
          (
          <issue>2</issue>
          ),
          <fpage>379</fpage>
          -
          <lpage>398</lpage>
          (
          <year>2024</year>
          ). https://doi.org/10.1080/14719037.2022.2095001
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>