<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Understanding User Perceptions of AI-Enabled ERP Systems: A Qualitative Study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hajar Maimouni</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>My Abdelouhab Salahddine</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Xavier Franch</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National School of Business and Management of Tangier, Abdelmalek Essaadi University</institution>
          ,
          <addr-line>Boulevard Moulay Rchid, Airport Road, P.O. Box 1255, 90000 Tangier</addr-line>
          ,
          <country country="MA">Morocco</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Universitat Politècnica de Catalunya</institution>
          ,
          <addr-line>08034 Barcelona</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>As AI reshapes enterprise systems, its success lies not in algorithms alone but in the eyes of those who use it daily. This study investigates how professionals perceive AI-enabled ERP systems: what makes them trust, hesitate, or adopt. Through semi-structured interviews with experienced users in different managerial positions, we explore how explainability, usability, and automation inform user confidence and perceived value. Participants voiced optimism about automation's ability to reduce errors and enhance performance, but insisted on clarity, auditability, and human oversight as non-negotiable values. Trust, we found, is neither instant nor absolute; it builds through repeated exposure, transparent logic, and peer validation. Our conceptual framework, grounded in TAM and enriched with trust and transparency theories, served both as guide and lens throughout the inquiry. The findings highlight that intelligent systems are adopted not merely because they work, but because they are understood, trusted, and made to work with people.</p>
      </abstract>
      <kwd-group>
        <kwd>AI-enabled ERP</kwd>
        <kwd>Explainability</kwd>
        <kwd>Usability</kwd>
        <kwd>Automation</kwd>
        <kwd>Trust</kwd>
        <kwd>Technology adoption</kwd>
        <kwd>Qualitative research</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Enterprise Resource Planning (ERP) systems have long served as the digital backbone of organizational
operations. Today, they are undergoing significant transformation through the integration of artificial
intelligence (AI), which adds new layers of automation, adaptive features and predictive capabilities
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Building on this evolution, AI-enabled ERP systems are not only optimizing workflows and
supporting informed decision-making by making processes more intelligent, agile, and efficient [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ][
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
but as they gain autonomy and complexity, they can also become less transparent and intuitive,
bringing new concerns about usability, transparency and user trust [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In this shifting landscape,
these systems’ effectiveness hinges not just on technical capabilities but on how users engage with and
make sense of these technologies in everyday practice [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Although existing research has focused on
algorithmic performance, it tends to overlook the lived experiences and perceptions of professionals
who interact with these systems routinely. To address this gap, this study explores how users perceive AI
functionalities in ERP systems, and how factors such as explainability, usability, and automation shape
their trust, perceived value, and adoption behavior. Grounded in the Technology Acceptance Model
(TAM) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and insights from trust and transparency research, we developed a conceptual framework to
guide our qualitative inquiry. Through interviews with experienced users, this study investigates the
conditions under which intelligent ERP technologies are adopted, not just for their technical capabilities,
but because they are perceived as clear, trustworthy, and effectively aligned with everyday practices.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Theoretical foundations and conceptual background</title>
      <sec id="sec-2-1">
        <title>2.1. Artificial intelligence integration into ERP systems</title>
        <p>
          The integration of AI into ERP systems marks a profound reimagining of enterprise technologies,
not just in what they do, but in how they behave, learn, and collaborate with their users. ERP systems
are evolving into intelligent ecosystems that respond dynamically to organizational complexity. AI
doesn’t only enhance these systems; it reshapes their purpose, pushing ERP from a back-office record
keeper [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] to a proactive decision support agent, capable of autonomous process execution [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
        </p>
        <p>
          The enhancements AI brings aim to make enterprise systems more responsive, less
labor-intensive, and more strategic in scope [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. As Yathiraju [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] emphasizes, AI models learn from
historical and real-time data, optimizing performance across tasks and enhancing operational
foresight.
        </p>
        <p>Despite these benefits, the complexity of AI models introduces significant interpretability
issues. AI models often operate as “black boxes”, relying on complex, non-linear algorithms that
obscure their decision-making processes.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Explainability as a driver of perceived transparency and trust</title>
        <p>
          Within AI-Enabled ERP systems, explainability has emerged as a foundational quality attribute. It
concerns making the system’s reasoning visible and understandable, that is, the system’s ability to provide
meaningful explanations for its outputs, allowing users to comprehend how decisions are made or
actions are carried out [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. When users can follow the logic behind an output, the system becomes easier to
trust. Recent research emphasizes that explainability is not only a technical feature but a critical
enabler of perceived transparency, which reflects the openness, visibility and interpretability of AI
processes from the user’s perspective [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
        </p>
        <p>
          This relationship is significant because transparency functions as a channel to trust, and a
transparent system helps build user confidence, especially in areas where trust in the system is
key [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. This link is reinforced by the work of Esmaeilzadeh [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. In contrast, unclear systems
that operate as “black boxes” can increase uncertainty and hesitation, which eventually can
hinder users' engagement and acceptance.
        </p>
        <p>Thus, explainability strengthens transparency, which in turn fosters trust, together creating
a pathway toward user acceptance and adoption of AI-Enabled ERP systems.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Usability and user perceptions: ease and usefulness</title>
        <p>
          Usability goes beyond interface design quality or visual layout; it reflects how intuitively users can
navigate and interact with a system. Being a factor closely tied to user experience, usability feeds
directly into two core concepts from the Technology Acceptance Model (TAM) [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]: how easy the
system is to use, and how helpful it appears to be for accomplishing tasks. Moreover, Mlekus et al.
[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] highlight that when a system is easy to operate, understand, and trust, users assess
it as both less effortful and more beneficial to their work performance.
        </p>
        <p>
          This connection is also supported by findings from Harsanto et al. [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], who applied the TAM
framework to digital services adoption and found that systems perceived as user-friendly and
adequate to practical needs were more likely to be adopted and retained. Their empirical model
confirms that intuitive design makes a system easier to use and makes it easier to see the value
in using it.
        </p>
        <p>In this sense, usability impacts the users’ assessment of AI-enabled ERP systems, and
contributes to their acceptance and adoption intention.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Automation level of AI functionalities and perceived usefulness</title>
        <p>
          The level of automation in AI-enabled ERP refers to the system’s ability to handle tasks or make
decisions without human intervention. This feature shapes users’ perception of a system’s
usefulness. As shown by Na et al. [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], the more a system helps
with operational efficiency, decision making, and workload reduction, the more it is seen as valuable
and useful, which boosts satisfaction and encourages adoption. Participants in that research also
pointed out that automation is a key driver of efficiency, especially in environments where large
volumes of data are processed.
        </p>
        <p>
          Moreover, according to Bademosi and Issa [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], automation or autonomous technologies are
appreciated when they deliver concrete benefits such as cost reduction and smoother processes.
Nevertheless, they insist that automation has strong potential only if it is perceived to be reliable,
trustworthy, and suited to the context in which it is used.
        </p>
        <p>These findings point to a common thread: automation provided by AI functionalities strongly
contributes to users’ evaluation of usefulness. Systems that provide balanced levels of autonomy
and user control are more likely to be perceived as useful, particularly when users recognize
tangible improvements in accuracy and support.</p>
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Perceptions and trust as determinants of acceptance and adoption intention</title>
        <p>
          Perceiving a system as useful and easy to use is a fundamental condition for its acceptance. Yet, when
it comes to systems marked by autonomy and high impact, trust is just as essential. As AI
functionalities are expected to take on greater responsibilities in enterprise processes, trust operates
alongside perceptions that determine user acceptance. Together, perceived usefulness, ease of use,
and trust shape user perceptions and influence behavioral intention [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], while models like TAM
focus on usefulness and usability as key drivers of adoption intention. However, recent research
shows that positive perceptions alone are not sufficient for acceptance; trust must also be
established to reduce uncertainty and encourage actual use [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Conceptual framework of the study</title>
      <p>This study is guided by a conceptual framework that structures the exploration of how users perceive
AI-Enabled ERP systems. Grounded in the Technology Acceptance Model (TAM) and complemented
by the theoretical background delineated in the previous section, the framework aims to emphasize
the mechanisms through which three technological features, namely explainability, usability and
automation level, shape users’ perceptions and acceptance. Rather than testing this framework
quantitatively, it serves as a sensitizing structure for the qualitative inquiry, informing the design of
interview questions and guiding the thematic coding process. This framework articulates how users
interpret the functionalities of AI systems in ERP settings and how these interpretations translate
into acceptance or resistance.</p>
      <sec id="sec-3-1">
        <title>3.1. Overview of main concepts</title>
        <p>The model identifies three primary technological characteristics of AI-Enabled ERP systems:</p>
        <p>
          Explainability, defined as the degree to which the system’s logic and decision processes can
be understood by users, is hypothesized to shape perceived transparency [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>
          Usability, reflecting how easy and functional the system is for end users, influences
perceived ease of use [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
        <p>
          Automation level, referring to the extent to which the system operates independently
without user intervention, is a factor expected to affect perceived usefulness [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
        </p>
        <p>
          These perceptions in turn influence trust [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], and ultimately shape acceptance and adoption
intention [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] [18] [19] [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ].
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Conceptual relationships and logic</title>
        <p>
          Fig. 1 shows the conceptual framework of our study. This framework draws upon both classical and
contemporary foundations of technology acceptance, relevant to AI-enabled ERP systems. It follows
a layered logic, where explainability serves as the foundation for transparency [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], which in turn
conditions trust in the system. Trust subsequently acts as a mediator in forming favorable user
attitudes [19] [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Similarly, usability conditions perceptions of ease [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], while automation
contributes to perceived usefulness through efficiency and reduced operating burden [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ].
        </p>
        <p>
          While Davis’s (1989) [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] original theory remains fundamental to understanding technology
acceptance, recent developments, such as the integration of explainability and automation,
reflect the specific affordances and risks of intelligent systems. This model acknowledges that
newer elements such as transparency and automation level modify these classical constructs in
the AI context. Ultimately, all intermediate perceptions converge toward influencing acceptance
and adoption intention, which remains the central focus.
        </p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Role in guiding the exploration</title>
        <p>The framework served as an analytical lens to inform data collection and interpretation. Interview
protocols were structured to capture users’ subjective experiences related to system transparency,
ease of interaction, perceived utility, and trust. During data analysis, it enabled a theoretically
informed coding structure while still allowing inductive insights to emerge.</p>
        <p>Instead of limiting the scope of findings, the framework served to anchor the study in real
user experiences while building on established theory. In this way, this model is a flexible tool
for organizing and interpreting the complexity of user perceptions in a dynamic technological
context.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <sec id="sec-4-1">
        <title>4.1. Research design</title>
        <p>This study adopts a qualitative, exploratory research design to investigate user perceptions of
AI-Enabled ERP systems. Given the novelty and complexity of integrating AI into ERP systems,
qualitative inquiry is well-suited to capture how users interpret, evaluate, and respond to these
evolving technologies. The goal is to build rich descriptions of the realities of organizational practice.</p>
        <p>Based on the conceptual framework, the research question of our study is formulated as
follows:</p>
        <p>RQ1: How do users perceive and experience the explainability, usability, and automation of
AI functionalities in ERP systems, and how do these perceptions shape their trust and willingness
to adopt such systems?</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Data collection</title>
        <p>Data were collected through semi-structured interviews with five participants selected through
purposive sampling. All selected participants (1) had established experience in their roles, (2)
regularly used ERP systems, and (3) held managerial or decision-making positions (see Table 1).
These inclusion criteria ensured participants had sufficient expertise and contextual familiarity
to offer informed reflections on AI functionalities within ERP environments. An interview guide
was developed around the key dimensions of the conceptual framework. Interviews were
conducted via secure online platforms. Each session lasted 35-50 minutes and was transcribed
and anonymized with the participants’ consent. An excerpt of the interview guide is shown in
Table 2.</p>
        <sec id="sec-4-2-1">
          <title>Excerpt of the interview guide (Table 2)</title>
          <p>Trust:
• What factors would make you trust AI functionalities embedded in ERP systems?
• Are there any concerns or doubts you have when it comes to trusting AI decisions in ERP?</p>
          <p>Transparency:
• How does the transparency of AI decision-making in ERP systems affect your confidence in using them?
• How important is it for you to understand how the AI functionalities make decisions within the ERP system?</p>
          <p>Usability and ease of use:
• What would you expect regarding the ease of use if AI functionalities were added to your ERP system?
• Have you found current invoice processing interfaces efficient? How could AI improve, or complicate, that experience?</p>
        </sec>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Data analysis</title>
        <p>The interview data were examined using thematic analysis, following the six-phase approach
outlined by Braun and Clarke (2006) [20]. The process involved familiarizing with the data, coding
inductively and deductively, identifying and refining themes, and linking them to the theoretical
framework. Manual coding offered flexibility to capture unanticipated insights and allowed the
analysis to evolve in response to the data.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Ethical considerations</title>
        <p>Before data collection, participants were informed about the study’s objectives and procedures, and
about their right to withdraw at any point during the interview process.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Findings</title>
      <sec id="sec-5-1">
        <title>5.1. Thematic overview</title>
        <p>The qualitative analysis of five participant interviews revealed six major themes that reflect user
perceptions of AI-enabled ERP systems. These themes represent critical factors shaping the user
experience and influencing trust, perceived value, and adoption intention. Participants, who occupy
managerial roles and actively use ERP systems in their day-to-day work, offered diverse yet
converging perspectives on the integration of AI features into enterprise processes. The developed
themes are shown in Table 3. These themes emerged both deductively from the conceptual
framework and inductively through user narratives. The frequency and salience of each theme are
reflected across participants’ roles and experiences.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Theme 1: Explainability and transparency</title>
        <p>A central theme across all interviews was the importance of explainability. Participants consistently
highlighted the need to understand how AI makes decisions within ERP systems, particularly when
those decisions impact financial, operational, or compliance processes.</p>
        <p>Participant 1 clearly stated: “I need to see the reasons behind before trusting any decision… we need
to have the explanation to the management team. We cannot only say that the system is saying that… I
need to know the reasons behind any analysis or any decision.” For her, transparency is not only about
seeing outcomes but understanding the underlying rationale, what data was used, how it was
processed, and how the conclusion was reached.</p>
        <p>This traceability was echoed by Participant 5 as well, who explained: “Of course transparency,
is key. More the model is transparent more I will trust it. That includes where the data comes from and
how it is processed and how it gets to the final results.” He acknowledged that while technical
transparency is not always essential for daily use, it becomes critical when AI recommendations
diverge from expectations. “If I can’t understand how or why a decision was made, it becomes very
hard to rely on it with confidence”, he noted.</p>
        <p>Explainability was seen not only as a technical feature but also as a condition for transparency
and ultimately trust. Several participants emphasized that explainability is essential for
accountability, especially in contexts where the system’s recommendations might conflict with
organizational policy or personal judgment, as Participant 2 stated: “Yes, because I think everything
in our daily tasks, need to be approved and reviewed by someone else. So, peer review is a must.” These
insights reveal a clear priority: users welcome AI but expect it to be transparent, traceable, and
aligned with human logic. For them, explainability is not optional, but essential to responsible
and trusted adoption.</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Theme 2: Usability and interface simplicity</title>
        <p>Participants expressed a strong preference for ERP systems that are intuitive, clear, and minimize
the cognitive load. This theme was particularly salient among users who perform tasks under time
pressure or handle large volumes of data. They also emphasized that while ERP systems may seem
complex at first, they become manageable with familiarity.</p>
        <p>Participant 1 shared: “at the beginning, it may be complicated to get familiar with an ERP system...
but then... it’s not that hard to know how to use the ERP”. Similarly, Participant 2 noted: “IFS 10 can be
complex... and requires a proper training. But for now, I’m used to it, and I find it very simple... in my
daily tasks”.</p>
        <p>AI was seen as a way to enhance usability, provided it reduces, rather than adds, complexity.
Participant 4 highlighted that “three or four necessary clicks to reach… a window or a menu… can be
modeled easier” and that AI “is an opportunity to make it easier… and user friendly”. He added that it
could “eliminate the repetitive tasks or the repetitive clicks… it’s annoying a bit…”, but concluded that
if “the tool is user-friendly and easy to integrate into what we already do… I’d be all for it” as long as it
“adds value without creating more complexity”.</p>
        <p>Yet simplicity alone isn’t enough. Participant 5 warned, “only adding the functionalities is not
sufficient, training people and engaging is more important”.</p>
        <p>Usability was also linked to the perceived ease of use, echoing the Technology Acceptance Model
(TAM). Participants appreciated systems that offered clear dashboards, visual cues, and customizable
views. Poor usability was seen as a barrier to adoption, regardless of how advanced the AI features
were, as Participant 4 affirmed “what's really exciting is how AI is making ERP systems more user
friendly and easy to use”.</p>
      </sec>
      <sec id="sec-5-4">
        <title>5.4. Theme 3: Perceived usefulness of automation</title>
        <p>The perceived usefulness of AI-driven automation emerged as another major theme. Participants
generally welcomed automation for repetitive or low-value tasks, highlighting benefits such as
speed, error reduction, and operational consistency.</p>
        <p>For example, one participant described AI in ERP as “super helpful on the automation side of
the transactional steps that any employee in any department needs to go through… it will be super,
super helpful in that regard.” (Participant 4).</p>
        <p>Similarly, participants appreciated how AI could take over repetitive tasks. Participant 5 affirmed:
“Yes of course AI can help every job in his daily work, we all have some repetitive things where AI can
play a pivot role.”</p>
        <p>Another participant expected “time saving automation, since we have many repetitive tasks,
improve accuracy, … reduce manual invoice match and detect anomalies in vendor payments.”
(Participant 2).</p>
        <p>However, participants stressed that automation must be meaningful and context-aware. Blind
automation without business logic or adaptability was viewed as risky and frustrating.</p>
      </sec>
      <sec id="sec-5-5">
        <title>5.5. Theme 4: Trust, oversight, and risk concerns</title>
        <p>Trust was a recurrent concern throughout the interviews. While participants recognized the
potential of AI, they were reluctant to rely on it without human oversight. Trust was closely tied
to explainability, system performance, and the ability to intervene when necessary.</p>
        <p>For example, Participant 1 admitted: “At the beginning, I will not trust it, to be honest.” Initial
skepticism was tied to understanding how AI arrives at its outputs. Participant 5 noted that “…if I
can’t understand how or why a decision was made, it becomes very hard to rely on it with confidence”,
highlighting the need for clear reasoning behind AI decisions to maintain trust.</p>
        <p>Participants emphasized the importance of maintaining a human-in-the-loop approach. In
parallel, concerns were raised about the possibility of system errors and the lack of clarity over who
is ultimately responsible for decisions taken based on AI recommendations.</p>
        <p>Participant 3 warned, “The risks I associate with AI include false results… and relying too heavily on
AI without a human review… we have to… keep a human level of control.” In line with this, Participant
4 stated, “Of course, I would worry about over-reliance on AI. You can’t trust the suggestions that he
gives 100%.” Participant 2 likewise pointed to “data security concerns or over reliance on automated
suggestions, like losing the human oversight that is often critical in finance… risks would make me more
cautious and … rely on AI only as a support tool rather than a decision maker.”</p>
        <p>These concerns, especially around control and error management, often acted as barriers to
adoption, signaling the need for clearer oversight mechanisms and the ability for users to retain final
decision-making authority.</p>
      </sec>
      <sec id="sec-5-6">
        <title>5.6. Theme 5: Adoption conditions and expectations</title>
        <p>Finally, participants articulated a set of conditions under which they would feel confident adopting
AI-enabled ERP systems. A common theme was the need for adequate preparation and user training.</p>
        <p>Participant 1 explained that “The phase of testing or training is very, very important because it can
influence you to build the trust or it can guide you to not use these solutions at all. So, this is the first
point of taking the decision to even accept to use this kind of solutions or not.”</p>
        <p>Participant 3 similarly emphasized that “every new thing needs training to master.” Participant 5
agreed that simply introducing AI is not enough, stating “Only adding the functionalities is not
sufficient, training people and engaging is more important… if these features are integrated but
employees don’t understand how to use them or don’t see their value, they’ll just be ignored.” Ensuring
users are well-trained, comfortable, and see the personal value in the new tools was viewed as critical
for successful adoption.</p>
        <p>In addition to training, participants expected to see tangible improvements and a smooth
integration of AI into their workflows. For instance, Participant 4 said, “Honestly, what would really
push me to adopt them is seeing that they actually help me save time and improve how I work. If the AI
can handle repetitive tasks, like processing standard POs, matching invoices, or generating quick reports,
that’s a big plus.”</p>
        <p>Participant 2 recommended a careful rollout, noting “I think AI adoption should be a gradual and
accompanied by proper trainings and transparency first, and maybe for detecting anomalies, forecasting
payments or even optimizing processes, it will be a great tool.”</p>
        <p>Rather than rejecting AI, users expressed a conditional willingness to adopt, dependent on
institutional safeguards and personal empowerment.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>The study sheds light on the nuanced perspectives of ERP users facing the integration of AI features
into their daily work routines. The findings reveal that participants across various roles expressed a
cautious but growing interest in AI-enabled ERP systems, particularly in how these technologies can
reduce workload and enhance operational efficiency. Many viewed AI as a valuable assistant in
handling repetitive or time-consuming tasks, such as invoice matching, purchase order generation,
and anomaly detection, or any task that currently involves manual effort and delay. For some,
especially those in procurement and finance, AI’s ability to perform such actions in seconds rather
than hours signaled a clear shift toward more agile and responsive processes. However, this optimism
was balanced by a set of consistent concerns: AI must be transparent, explainable, and always under
human control.</p>
      <p>Rather than approaching AI as a replacement for human work, participants viewed it as a tool
that must complement professional expertise. There was a strong preference for AI that supports
decision-making, not one that attempts to automate it entirely. This was particularly evident in areas
like supplier negotiation or financial approvals, where context, nuance, and human interaction
remain essential. One participant referred to negotiation as “an art” that AI should not replicate,
suggesting that while AI can assist, it shouldn’t attempt to take over roles where human intuition
and experience are key. The consensus was clear: automation should empower professionals, not
sideline them.</p>
      <p>A central condition for trusting AI recommendations was the ability to understand and explain
them. Explainability was not treated as a purely technical feature but as a relational function, one
that enables users to trace logic, justify actions, and feel confident in the outcome, especially in
high-stakes or regulated environments like finance. Many stressed the need to see how data was processed
and what logic drove the outcomes. When that transparency was missing, trust in the system quickly
diminished. On the other hand, AI tools that made their process visible, or at least offered clues about
the data and assumptions behind a suggestion, were considered far more trustworthy and usable.</p>
      <p>Usability itself emerged as a critical component in this relationship. Participants did not separate
ease of use from system credibility. ERP platforms are already known for their complexity; adding
AI features that are difficult to navigate or understand would only make adoption harder. AI
integration was most welcomed when it simplified processes by reducing unnecessary steps,
anticipating user needs, or highlighting relevant data without requiring extensive manual queries.
Some described ideal scenarios in which AI could automatically detect missing invoice fields, suggest
corrections, or even generate pre-structured communication for suppliers, seamlessly, without
complicating the interface. In such cases, usability was not considered a bonus feature but a
necessary condition for acceptance.</p>
      <p>As AI systems take on more autonomy, users increasingly feel the need to stay in control. Many
participants spoke about starting with a cautious approach, double-checking every AI suggestion
before relying on it. Trust didn’t come instantly; it had to be earned through repeated, accurate
results over time. For some, trust was a gradual accumulation of positive experience, not something
that could be assumed from the outset.</p>
      <p>In the end, adoption of AI-enabled ERP systems was not described as a purely technical issue. It
was shaped more by user alignment, transparency, usability, and the usefulness of the automation tools
themselves. Participants were not asking for perfection but for clarity and a design approach that
keeps the human user at the center. When these conditions are met, when AI is accurate, explainable,
easy to use, and framed as a collaborative tool, users are not only willing but eager to adopt it.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Threats to validity</title>
      <p>Following Wohlin et al. [21], four validity dimensions were considered to ensure the rigor and
credibility of this study. Construct validity was addressed by designing the interview guide based on
the Technology Acceptance Model (TAM) and recent AI-TAM extensions (trust, explainability,
usability, automation). Terms were clarified and piloted to enhance consistency, though self-reported
data may introduce minor interpretive variation. Internal validity was strengthened through neutral
questioning, consistent protocols, and concrete examples, yet contextual factors such as company
culture or prior ERP experience may still have influenced responses. External validity is limited due
to the small purposive sample; however, the findings aim for analytic rather than statistical
generalization, offering transferable insights into TAM and trust-based adoption. Conclusion validity
was reinforced by systematic coding, triangulation of statements, and use of participants’ own words
to minimize bias. Interpretations remain cautious, describing observed patterns consistent with prior
TAM and AI research.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusion</title>
      <p>As enterprise technologies grow more autonomous, their success will rely not only on performance
but on their ability to align with users’ expectations, workflows, and values. This study highlights
that the adoption of AI-enabled ERP systems in managerial contexts depends on more than technical
sophistication. Core system attributes shape users’ perceptions of transparency, ease of use, and
efficiency, which in turn influence trust and ultimately the intention to adopt. Trust emerged as a
critical bridge between system design and user engagement, which stresses the importance of
making AI functionalities understandable, reliable, and supportive of human judgment. This calls for
a design that remains fundamentally human-centered, transparent, responsive, and accountable.</p>
      <p>Future research may further explore sector-specific dynamics, longitudinal changes in trust, and
the role of institutional culture in shaping AI adoption trajectories. Studies could also expand these
insights by examining how these adoption factors evolve over time, particularly as users become
more familiar with AI-driven systems. Additionally, comparative research across industries or
cultural contexts could shed light on how sector-specific needs and organizational norms shape the
adoption intention. Finally, including perspectives from IT developers may also clarify where
strategic and technical priorities align or diverge in AI integration.</p>
      <p>Acknowledgements
The research work behind this paper was conducted with the support of the Centre National de la
Recherche Scientifique et Technique CNRST under the "PhD Associate Scholarship - PASS" program
and within the context of an Erasmus mobility program, both of which contributed to the successful
completion of this work. This work was also supported by the project PID2024-156019OB-I00 under
funding scheme MICIU /AEI /10.13039/501100011033 / FEDER, UE. We also extend our sincere
gratitude to all interview respondents, whose time, insights, and openness were invaluable to the
empirical phase of this research.</p>
      <p>Declaration on Generative AI
During the preparation of this work, the authors used ChatGPT for language editing, including
grammar and spelling correction and minor rephrasing. The authors reviewed and edited the content
and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B. J.</given-names>
            <surname>Wagner</surname>
          </string-name>
          and
          <string-name>
            <given-names>E. F.</given-names>
            <surname>Monk</surname>
          </string-name>
          ,
          <article-title>Concepts in enterprise resource planning</article-title>
          ., South-Western.,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>She</surname>
          </string-name>
          et al.,
          <article-title>"FinRobot: Generative Business Process AI Agents for Enterprise Resource Planning in Finance</article-title>
          .,
          <source>" arXiv preprint arXiv:2506.01423</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Kotha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Tokachichu</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Padakanti</surname>
          </string-name>
          ,
          <article-title>"Synergizing AI and ERP for Predictive Supply Chain Management and Quality Assurance in Healthcare.,"</article-title>
          <source>International Journal for Multidisciplinary Research (IJFMR)</source>
          , vol.
          <volume>6</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Yathiraju</surname>
          </string-name>
          ,
          <article-title>"Investigating the use of an artificial intelligence model in an ERP cloud-based system.,"</article-title>
          <source>International Journal of Electrical, Electronics and Computers</source>
          , vol.
          <volume>7</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>26</lpage>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>N.</given-names>
            <surname>Balasubramaniam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kauppinen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rannisto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Hiekkanen</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Kujala</surname>
          </string-name>
          ,
          <article-title>"Transparency and explainability of AI systems: From ethical guidelines to requirements</article-title>
          .,
          <source>" Information and Software Technology</source>
          ,
          <volume>159</volume>
          ,
          <fpage>107197</fpage>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <article-title>"Perceived usefulness, perceived ease of use, and user acceptance of information technology</article-title>
          .,
          <source>" MIS Quarterly</source>
          , vol.
          <volume>13</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>319</fpage>
          -
          <lpage>340</lpage>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Mlekus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Bentler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Paruzel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Kato-Beiderwieden</surname>
          </string-name>
          and
          <string-name>
            <given-names>G. W.</given-names>
            <surname>Maier</surname>
          </string-name>
          ,
          <article-title>"How to raise technology acceptance: user experience characteristics as technology-inherent determinants</article-title>
          .,
          <source>" Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO)</source>
          , vol.
          <volume>51</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>273</fpage>
          -
          <lpage>283</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gefen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Karahanna</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. W.</given-names>
            <surname>Straub</surname>
          </string-name>
          ,
          <article-title>"Trust and TAM in online shopping: An integrated model</article-title>
          .,
          <source>" MIS Quarterly</source>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>90</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>H.</given-names>
            <surname>Choung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>David</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <article-title>"Trust in AI and its role in the acceptance of AI technologies.,"</article-title>
          <source>International Journal of Human-Computer Interaction</source>
          , vol.
          <volume>39</volume>
          , no.
          <issue>9</issue>
          , pp.
          <fpage>1727</fpage>
          -
          <lpage>1739</lpage>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Kenesei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ásványi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kökény</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jászberényi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Miskolczi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Gyulavári</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Syahrivar</surname>
          </string-name>
          ,
          <article-title>"Trust and perceived risk: How different manifestations affect the adoption of autonomous vehicles.,"</article-title>
          <source>Transportation Research Part A: Policy and Practice</source>
          , vol.
          <volume>164</volume>
          , pp.
          <fpage>379</fpage>
          -
          <lpage>393</lpage>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Kenesei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kökény</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ásványi</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Jászberényi</surname>
          </string-name>
          ,
          <article-title>"The central role of trust and perceived risk in the acceptance of autonomous vehicles in an integrated UTAUT model</article-title>
          .,
          <source>" European Transport Research Review</source>
          , vol.
          <volume>17</volume>
          , no.
          <issue>1</issue>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Mhaskey</surname>
          </string-name>
          ,
          <article-title>"Integration of artificial intelligence (AI) in enterprise resource planning (ERP) systems: Opportunities, challenges, and implications</article-title>
          .,
          <source>" International Journal of Computer Engineering in Research Trends</source>
          , vol.
          <volume>11</volume>
          , no.
          <issue>12</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Samara</surname>
          </string-name>
          ,
          <article-title>"AI-driven SAP S4/HANA, advancing firm operational efficiency, decision-making and resource optimization.,"</article-title>
          <source>International Journal of Innovative Research and Scientific Studies</source>
          , vol.
          <volume>8</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>4795</fpage>
          -
          <lpage>4811</lpage>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Wanner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. V.</given-names>
            <surname>Herm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Heinrich</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Janiesch</surname>
          </string-name>
          ,
          <article-title>"The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study</article-title>
          .,
          <source>" Electronic Markets</source>
          , vol.
          <volume>32</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>2079</fpage>
          -
          <lpage>2102</lpage>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>V. S. K.</given-names>
            <surname>Settibathini</surname>
          </string-name>
          ,
          <article-title>"Future of ERP: AI-Driven Transformation for Business Success.,"</article-title>
          <source>International Journal of Professional Studies</source>
          , vol.
          <volume>19</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>212</fpage>
          -
          <lpage>225</lpage>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>P.</given-names>
            <surname>Esmaeilzadeh</surname>
          </string-name>
          ,
          <article-title>"The impacts of the perceived transparency of privacy policies and trust in providers for building trust in health information exchange: empirical study.,"</article-title>
          <source>JMIR Medical Informatics</source>
          , vol.
          <volume>7</volume>
          , no.
          <issue>4</issue>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>W. A.</given-names>
            <surname>Harsanto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Matondang</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. P.</given-names>
            <surname>Wibowo</surname>
          </string-name>
          ,
          <article-title>"The use of technology acceptance model (TAM) to analyze consumer acceptance towards e-commerce websites. A case of the Plantage id digital transformation solution.,"</article-title>
          <source>Journal of Environmental and Development Studies</source>
          , vol.
          <volume>4</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>028</fpage>
          -
          <lpage>037</lpage>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Na</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Heo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kim</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. W.</given-names>
            <surname>Whang</surname>
          </string-name>
          ,
          <article-title>"Artificial intelligence (AI)-based technology adoption in the construction industry: a cross-national perspective using the technology acceptance model.,"</article-title>
          <source>Buildings</source>
          , vol.
          <volume>13</volume>
          , no.
          <issue>10</issue>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bademosi</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Issa</surname>
          </string-name>
          ,
          <article-title>"Factors influencing adoption and integration of construction robotics and automation technology in the US.,"</article-title>
          <source>Journal of Construction Engineering and Management</source>
          , vol.
          <volume>147</volume>
          , no.
          <issue>8</issue>
          , 04021075,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>V.</given-names>
            <surname>Braun</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Clarke</surname>
          </string-name>
          ,
          <article-title>"Using thematic analysis in psychology.,"</article-title>
          <source>Qualitative Research in Psychology</source>
          , vol.
          <volume>3</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>77</fpage>
          -
          <lpage>101</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wohlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Runeson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Höst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Ohlsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Regnell</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Wesslén</surname>
          </string-name>
          ,
          <article-title>Experimentation in software engineering: An introduction</article-title>
          ., Kluwer Academic Publishers.,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>