<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>From Technostressors to AI-Stressors: A Systematic Literature Review of Stressors Associated with AI Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Pratik Sapkota</string-name>
          <email>pratik.sapkota@tuni.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Markus Makkonen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Henri Pirkkalainen</string-name>
          <email>henri.pirkkalainen@tuni.fi</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Markus Salo</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Tampere University, Faculty of Management and Business, Information and Knowledge Management Unit</institution>
          ,
          <addr-line>PO Box 553, FI-33014 Tampere</addr-line>
          ,
          <country country="FI">Finland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Jyvaskyla, Faculty of Information Technology</institution>
          ,
          <addr-line>PO Box 35, FI-40014 Jyvaskyla</addr-line>
          ,
          <country country="FI">Finland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>18</volume>
      <abstract>
        <p>This systematic review explores how the factors associated with artificial intelligence (AI) systems induce stress in workplace contexts. Following a pre-specified protocol and the PRISMA 2020 guidelines, we searched Scopus on 7 March 2025 for peer-reviewed journal articles written in English (no date limits) that empirically or conceptually link AI use to negative stress at work. Studies in which AI was investigated solely as a stress-relieving tool were excluded. Screening 1,333 records yielded 66 eligible articles (40 quantitative, 9 qualitative, 6 mixed-methods, and 11 conceptual) spanning healthcare, hospitality, manufacturing, transport, and other sectors. Although no formal risk-of-bias tools were applied, evidence strength was noted descriptively. Data were charted on context, AI type, methods, and stress findings and then narratively synthesized. Across the reviewed studies, AI use amplifies the six established technostressors: techno-overload, techno-invasion, techno-complexity, techno-insecurity, techno-uncertainty, and techno-unreliability. In addition, it introduces five emerging AI-stressors (i.e., stressors unique to AI, distinct from the established technostressors): techno-unpredictability, loss of autonomy, ethical and moral conflict, social erosion, and career disruption. These findings indicate that the established technostressors are inadequate to capture the distinctive characteristics of contemporary AI systems. Although most evidence is drawn from cross-sectional designs and focuses on negative outcomes, the review highlights an urgent need for more nuanced and responsible approaches to AI utilization in organizational contexts.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Technostress</kwd>
        <kwd>AI-Stressors</kwd>
        <kwd>Technostressors</kwd>
        <kwd>AI-induced Stress</kwd>
        <kwd>Workplace</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The rapid diffusion of artificial intelligence (AI) technologies is reshaping both organizational
workflows and employee experiences. According to a 2025 McKinsey global survey, 78 percent of
firms have already deployed AI in at least one business function, and 71 percent use generative AI
tools on a regular basis [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These systems, which range from machine learning algorithms to fully
autonomous decision engines, have the potential to raise productivity and even create new job roles
by automating routine tasks [
        <xref ref-type="bibr" rid="ref2 ref3">2,3</xref>
        ]. However, despite promising considerable benefits, concerns
related to worker well-being are also increasing. Recent studies suggest that AI implementation often
leads to increased workloads and rising skill demands [
        <xref ref-type="bibr" rid="ref4 ref5">4,5</xref>
        ], negatively affecting employees’ moods
and well-being [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Likewise, a recent study in healthcare [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] shows that clinicians feel additional
pressure when AI-driven diagnostic systems give unexpected results, suggesting that the technology
itself can trigger job-related stress.
      </p>
      <p>
        Stress is defined as a “relationship between the person and the environment that is appraised by
the person as taxing or exceeding his or her resources and endangering well-being” [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. According
to Tarafdar et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], technostress, in turn, refers specifically to “a situation of stress that an
individual experiences due to his or her use of information technology (IT)” [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The concept of
technostress was first introduced by Brod [9] as “a modern disease of adaptation resulting from an
inability to cope with new computer technologies in a healthy manner.” Building on this foundation,
in organizational research, Tarafdar et al. [10] and Ragu-Nathan et al. [11] delineate five interrelated
technostress creators: techno-overload, situations where information and communication
technologies (ICTs) force users to work faster and longer; techno-invasion, where ICTs create a
situation in which users can potentially be reached anytime and feel the need to be constantly
connected; techno-complexity, where users feel that their skills are inadequate and that they are
forced to spend time and effort in learning and understanding new technologies; techno-insecurity,
where users feel threatened about losing their jobs to others who have a better understanding of new
ICTs; and techno-uncertainty, where constant changes and upgrades to ICTs create uncertainty and
a sense of insecurity about how to use the new applications. Additionally, a recent study [12] has
also established techno-unreliability as another critical technostress creator, characterized by system
errors, unpredictable freezes, crashes, and intermittent availability.
      </p>
      <p>When employees face these technostress creators or technostressors, this typically results in
various adverse consequences for them, which are also referred to as “strains” in the information
systems (IS) literature [13]. Job-level strains include reduced job satisfaction, diminished
organizational commitment, heightened role overload and conflict, and stronger turnover intentions
[14–16]. IS-use-related strains span weaker innovation and productivity, lower end-user satisfaction,
and even resigned or non-compliant system use [13,16–18]. Regarding personal well-being-related
strains, technostress has been consistently linked to anxiety, exhaustion, and burnout [13,14,19,20].</p>
      <p>
        However, the established technostressors do not yet sufficiently explain several stress phenomena
that appear unique to AI systems. AI systems differ from traditional IT studied in the IS literature
because many components are autonomous, often opaque, and self-adapting, and they can learn,
update decision rules, and in some settings act without continuous human supervision [
        <xref ref-type="bibr" rid="ref2">2,21</xref>
        ]. In
high-speed rail operations, Chen et al. [22] observed that human drivers might be required to
abruptly reclaim control from AI during emergencies, creating extreme demands on cognitive
resources. This is particularly challenging when system behavior is opaque and difficult to interpret.
This stressor, rooted in AI’s unpredictability and partial human control, does not neatly align with
the established categorizations of technostress creators. The issue is not confined to sudden
emergencies either. Tam et al. [23] argue that although AI typically automates routine or repetitive
tasks, humans consequently handle more complex, unpredictable, and emotionally challenging
exceptions, that is, the very scenarios that AI cannot manage. Far from simplifying work overall,
this arrangement can heighten stress, particularly as employees are expected to intervene only when
something goes wrong. Thus, an important theoretical question arises: how do the unique
characteristics of AI contribute to new forms of technostress in the workplace?
      </p>
      <p>
        Furthermore, current findings on how AI systems contribute to psychological strain and
workplace stress are fragmented across diverse sectors, including healthcare [
        <xref ref-type="bibr" rid="ref5">5,24–26</xref>
        ], transport
[22], manufacturing [
        <xref ref-type="bibr" rid="ref2">2,27</xref>
        ], and education [28,29]. This fragmentation underscores the need for a
systematic review that consolidates cross-sector insights and examines the factors associated with
AI systems that induce stress in the workplace. This issue is timely as well, given increasing evidence
that AI may both amplify existing stressors and introduce novel ones [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Reports from the
International Labour Organization (ILO) [30], for instance, highlight how AI contributes to job
insecurity, intensified workloads, and diminished autonomy through continuous monitoring.
      </p>
      <p>Thus, to address these theoretical and empirical gaps, this review introduces and systematically
studies the concept of “AI-stressors,” which refers to stressors uniquely associated with AI systems.
Against this backdrop, the review addresses two central research questions: RQ1: What kinds of
emerging AI-stressors have been identified in prior literature? RQ2: How do these
AI-stressors differ from the established technostressors previously associated with traditional
IT? This is done by systematically reviewing and synthesizing evidence from 66 peer-reviewed
journal articles, drawing on the PRISMA guidelines [31] and the structured review process outlined
by Okoli [32].</p>
      <p>The contribution of this systematic literature review is both theoretical and practical.
Theoretically, it helps uncover what type of AI-stressors can be identified in modern workplaces and,
thus, expands the existing research on technostress. Practically, identifying and categorizing these
stressors enables organizations to design better AI systems, organizational support structures, and
managerial policy interventions to safeguard employee well-being.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methods</title>
      <sec id="sec-2-1">
        <title>2.1. Protocol and Eligibility Criteria</title>
        <p>We conducted a standalone systematic literature review following the eight-step methodology
outlined by Okoli [32], which emphasizes a rigorous approach to planning, searching, screening,
extracting, and synthesizing literature. In line with steps 1 and 2 (defining the review’s purpose and
drafting a protocol), we first clarified our objective to identify and categorize stressors associated
with AI systems in workplace contexts and established a structured review protocol to ensure
consistency across all stages of the review. The protocol included predefined procedures for screening,
extraction, and data handling, and was pilot tested on a subset of studies to refine our approach. At
the outset, we defined inclusion and exclusion criteria to ensure that only relevant studies
addressing AI-induced stress were included. Specifically, we included peer-reviewed journal articles (1)
published in the English language and at the final publication stage, (2) that explicitly discussed
negative stress (distress) induced (or partly induced) by AI in workplace or organizational contexts
(this also included studies that reported both positive and negative impacts of AI, as long as the
negative, stress-related aspects were analyzed in sufficient depth), and (3) that reported or theorized
a direct link between some aspect of AI and the experience of stress (or its outcomes like strain,
anxiety, or burnout) among workers. We excluded studies that referred to “stress” only in general
terms or mentioned it briefly without establishing a clear connection to AI. Likewise, studies that
framed AI solely as a tool for mitigating or managing stress were excluded.</p>
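        <p>To make the screening rule concrete, the minimal sketch below encodes the inclusion and exclusion criteria as a single boolean check. It is illustrative only: the actual screening relied on human judgment, and the parameter names are ours rather than items from the protocol.</p>
        <preformat>
# Illustrative encoding of the eligibility criteria in Section 2.1.
# The real screening was a human judgment call; these booleans merely
# mirror the criteria stated in the text.
def is_eligible(peer_reviewed_journal, in_english, final_stage,
                discusses_ai_induced_distress_at_work,
                links_ai_to_stress_or_strain,
                ai_framed_only_as_stress_reliever,
                stress_mentioned_only_in_passing):
    included = (peer_reviewed_journal and in_english and final_stage
                and discusses_ai_induced_distress_at_work
                and links_ai_to_stress_or_strain)
    excluded = (ai_framed_only_as_stress_reliever
                or stress_mentioned_only_in_passing)
    return included and not excluded
        </preformat>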
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Information Sources and Search Strategy</title>
        <p>A comprehensive search was performed in Scopus, chosen for its broad coverage of interdisciplinary
scholarly literature, on March 7, 2025. The search string was developed iteratively, combining terms
for stress, AI, and workplace contexts, and was finalized as: TITLE-ABS-KEY (stress* OR *stress) AND
TITLE-ABS-KEY ("AI" OR "Artificial Intelligence") AND TITLE-ABS-KEY (work* OR organization*)
AND ( LIMIT-TO ( PUBSTAGE,"final" ) ) AND ( LIMIT-TO ( DOCTYPE,"ar" ) ) AND ( LIMIT-TO
( LANGUAGE,"English" ) ). This query was applied to titles, abstracts, and keywords, ensuring we
captured literature explicitly referencing “stress” in relation to AI and work. We used wildcard
characters to include common variations of key terms, for example, “stress* or *stress” was used to
retrieve results that mention keywords like “stress,” “stressors,” “technostress,” or “distress,” while
“work*” captured both “work” and “workplace,” and “organization*” picked up terms like
“organization” and “organizational.” We included the filters for document type (articles), publication stage
(final articles only, not early access or in-press drafts), and language (English) directly in the query
to focus the results. We deliberately chose not to apply subject area or date filters so as not to
exclude relevant work from adjacent fields. Since AI in the workplace is studied across
multiple disciplines, narrowing the search by subject area would have risked missing important
contributions. No other databases were searched (per the scope of this review), and no manual search
of references was conducted, so the Scopus results represent the sole identification source. The
search yielded 1,333 unique records in total after the removal of two duplicates from the database.
The complete query and search date are reported here to ensure reproducibility. This corresponds
to Okoli’s [32] step 4 (search for literature), which emphasizes transparency and
reproducibility in search execution.</p>
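        <p>For readers who wish to re-run the search programmatically, the sketch below shows one possible way to submit the documented query to the Scopus Search API. It is a minimal illustration, not part of our protocol: the paging parameters and API-key placeholder are assumptions, and access requires an Elsevier key.</p>
        <preformat>
# Minimal sketch (not part of the review protocol): re-running the documented
# Scopus query via Elsevier's Scopus Search API. The API key and paging
# choices below are illustrative placeholders.
import requests

QUERY = (
    'TITLE-ABS-KEY(stress* OR *stress) '
    'AND TITLE-ABS-KEY("AI" OR "Artificial Intelligence") '
    'AND TITLE-ABS-KEY(work* OR organization*) '
    'AND LIMIT-TO(PUBSTAGE, "final") '
    'AND LIMIT-TO(DOCTYPE, "ar") '
    'AND LIMIT-TO(LANGUAGE, "English")'
)

def fetch_records(api_key, page_size=25):
    """Yield raw Scopus records for the review query, one page at a time."""
    start = 0
    while True:
        resp = requests.get(
            "https://api.elsevier.com/content/search/scopus",
            params={"query": QUERY, "start": start, "count": page_size},
            headers={"X-ELS-APIKey": api_key, "Accept": "application/json"},
            timeout=30,
        )
        resp.raise_for_status()
        entries = resp.json()["search-results"].get("entry", [])
        if not entries:
            return
        yield from entries
        start += page_size
        </preformat>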
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Selection Process</title>
        <p>All 1,333 retrieved records were screened in a two-stage process. At the first stage, titles and abstracts
were reviewed against the inclusion criteria by the first author. At this stage, the author excluded
publications that did not fit the topic (e.g., those where “stress” referred to mechanical stress/strain
or where “AI” referred to something other than artificial intelligence). At this first stage, 1,210 records
were excluded, leaving 123 journal articles. In the second stage, full texts of the remaining journal
articles were obtained and assessed thoroughly. Given the emerging nature of the topic, we included
both empirical studies and conceptual/commentary studies that explicitly discussed AI as an inducer
of workplace stress. Studies that mentioned stress in general terms without associating it with AI or
that examined technology-induced stress in the workplace but did not mention the role of AI as its
inducer were excluded at this stage. For example, one excluded study [33] discussed technostress in
a company that develops AI but did not examine the AI systems themselves as stressors. Similarly,
another study [34] was excluded because it surveyed future job-seekers’ hypothetical attitudes
toward emotional AI in the workplace without investigating actual experiences of stress among
current employees induced by AI systems. Other excluded studies framed AI solely as a tool to
mitigate workplace stress rather than examining if AI could induce stress. At this stage, 57 articles
were excluded, most commonly because they did not focus on AI as an inducer of stress, leaving 66
articles. No disagreements arose in the screening process among the authors as the criteria were
clear-cut. Figure 1 presents the PRISMA flow diagram of the screening process. The screening process
corresponds to Okoli’s [32] step 3 (apply practical screen), which recommends defining clear
inclusion/exclusion boundaries and documenting them systematically before proceeding to quality
appraisal or synthesis.</p>
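        <p>As a simple consistency check on the flow reported above, the counts can be encoded directly; the variable names below are ours, and the pre-deduplication total (1,335) is inferred from the 1,333 unique records plus the two removed duplicates.</p>
        <preformat>
# Consistency check on the screening flow (Sections 2.2 and 2.3).
identified = 1335                  # inferred: 1,333 unique records + 2 duplicates
duplicates_removed = 2
screened = identified - duplicates_removed        # 1,333 titles/abstracts screened
excluded_title_abstract = 1210
full_text_assessed = screened - excluded_title_abstract   # 123 full texts
excluded_full_text = 57
included_in_synthesis = full_text_assessed - excluded_full_text   # 66 articles

assert screened == 1333
assert full_text_assessed == 123
assert included_in_synthesis == 66
        </preformat>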
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Data Extraction</title>
        <p>We developed a data extraction form (implemented in a spreadsheet) to systematically capture relevant
information from each included study. The form was pilot tested on a few studies and refined. For
each study, we extracted bibliographic details (authors, year, title, and journal), study characteristics
(sector or context, AI technology examined, methodology, and sample size, if empirical), and most
importantly, the findings related to AI and stress. In particular, we took note of any AI-stressors
mentioned (e.g., “job insecurity due to AI”), how these were defined or measured, and any
theoretical frameworks used (e.g., technostress creator categories or stress appraisal models). We also
recorded whether the study discussed coping strategies or interventions and whether it compared
the emerging AI-stressors to the established technostressors to address the question of overlap or
novelty. To ensure consistency, one author performed the primary extraction for all studies. The
extraction process corresponds to Okoli’s [32] steps 5 and 6 (study quality screening and data
extraction) and was conducted in an iterative manner alongside reading the literature.</p>
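        <p>The structure of the extraction form can be pictured as a typed record, as in the sketch below; the field names paraphrase the items listed above and are not the exact spreadsheet headers.</p>
        <preformat>
# Illustrative shape of one row in the data extraction spreadsheet
# (field names paraphrase Section 2.4; they are not the exact headers).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExtractionRecord:
    authors: str
    year: int
    title: str
    journal: str
    sector: str                 # e.g., "healthcare", "hospitality"
    ai_type: str                # e.g., "clinical decision support"
    methodology: str            # quantitative / qualitative / mixed / conceptual
    sample_size: Optional[int]  # None for conceptual studies
    stressors: list[str] = field(default_factory=list)   # e.g., ["job insecurity due to AI"]
    frameworks: list[str] = field(default_factory=list)  # e.g., ["technostress creators"]
    discusses_coping: bool = False
    compares_to_technostressors: bool = False
        </preformat>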
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Data Elements and Quality Appraisal</title>
        <p>The key data elements extracted for synthesis were the AI-stressors associated with AI systems, i.e.,
the specific factors related to AI systems that the authors identified as inducing stress for workers.
These could be technological features (like AI unpredictability), job factors (like changed roles due
to AI), or psychological perceptions (like the fear of replacement). We also noted stress outcomes
(e.g., anxiety and burnout) when relevant to ensure that we correctly interpreted the antecedents
versus consequences of stress. While we did not exclude studies based on methodological quality
(given the exploratory nature of this study, we wanted to include conceptual/commentary articles as
well), we did appraise the strength of evidence for the claims of each empirical study. Each empirical
study was categorized as providing strong, moderate, or weak evidence based on factors like
sample size and the rigor of study design. Conceptual/commentary studies were treated cautiously,
mainly to inform theory, and were not counted as empirical evidence. This approach aligns with Okoli’s [32]
recommendation to include diverse sources but remain aware of their quality. We did not perform a
formal risk of bias assessment for each study as one would in a clinical review because our outcomes
were not experimental effects. However, we acknowledge potential biases in the literature, such as
a publication bias favoring studies that report negative outcomes or problems associated with AI use.</p>
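        <p>The evidence-strength annotation can be sketched as a simple decision rule; note that the numeric thresholds below are placeholders of our own, since the actual appraisal weighed sample size and design rigor holistically.</p>
        <preformat>
# Hypothetical sketch of the evidence-strength annotation (Section 2.5).
# The thresholds are illustrative placeholders, not the criteria actually used.
def evidence_strength(is_empirical, sample_size, rigorous_design):
    if not is_empirical:
        return "conceptual (informs theory; not counted as empirical evidence)"
    if rigorous_design and sample_size is not None and sample_size >= 200:
        return "strong"
    if sample_size is not None and sample_size >= 50:
        return "moderate"
    return "weak"
        </preformat>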
      </sec>
      <sec id="sec-2-6">
        <title>2.6. Analysis and Synthesis Methods</title>
        <p>We employed a narrative qualitative synthesis, structured thematically, to integrate findings from
separate studies. Following Okoli’s [32] step 7 (analysis and synthesis), we used an inductive coding
approach in which all extracted stressors were listed and iteratively categorized into themes based
on conceptual similarity. For example, codes such as “algorithmic control,” “loss of control,” and “AI
decision override” were merged into a broader theme of “loss of autonomy” (emerging AI-stressor),
whereas descriptions matching the six established technostressors (techno-overload, invasion,
complexity, insecurity, uncertainty, unreliability) were retained under those headings. We compared
these emergent themes with the established technostressors [10–12] to identify points of alignment
or divergence, then organized the results around the complete set of stressors (emerging AI-stressors
and established technostressors) associated with AI systems. Within each theme, we aggregated
evidence from multiple studies, highlighting representative examples and noting frequency (i.e., how
many studies mentioned each stressor) to yield a sense of prevalence, although no quantitative
meta-analysis was performed, as most studies did not provide commensurate effect sizes and their
outcomes were qualitative or varied. Therefore, no heterogeneity metrics or subgroup analyses were
needed. We also constructed summary tables to present the themes and sample references in a
concise manner (cf. Tables 1 and 2). The synthesis focused on common patterns and unique insights,
aiming to answer the research question comprehensively. We did not use a formal certainty appraisal
tool such as GRADE. Instead, we annotated our level of confidence in findings based on the
methodological transparency of each study and the clarity and relevance of its theoretical
contribution to the topic of AI-induced stress. Finally, Okoli’s [32] step 8 (writing the report) was completed
by compiling the findings according to PRISMA reporting standards.</p>
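        <p>The inductive merging of codes into themes can be illustrated as a mapping plus a frequency count; the code-to-theme pairs below come from the examples in the text, while the helper function itself is ours.</p>
        <preformat>
from collections import Counter

# Example code-to-theme mapping (Section 2.6). Codes matching an established
# technostressor are retained under their own heading.
CODE_TO_THEME = {
    "algorithmic control": "loss of autonomy",
    "loss of control": "loss of autonomy",
    "AI decision override": "loss of autonomy",
    "techno-overload": "techno-overload",   # retained as-is
}

def theme_frequencies(extracted_codes):
    """Count how many extracted stressor codes fall under each theme."""
    return Counter(CODE_TO_THEME.get(code, code) for code in extracted_codes)
        </preformat>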
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>The results are organized in three subsections. The first subsection presents a descriptive overview
that classifies the 66 peer-reviewed studies by research method, sector, and AI system type. The
second subsection introduces the five emerging AI-stressors that go beyond those documented in
prior IS technostress research [10–12]. The third subsection discusses how AI amplifies the six
established technostressors [10–12] associated with traditional IT.</p>
      <sec id="sec-3-1">
        <title>3.1. Descriptive Overview</title>
        <p>Methodologically, out of the 66 peer-reviewed studies on stressors associated with AI systems, 40
studies (about 61%) employed quantitative methods (e.g., surveys or experiments). In contrast, 9
studies (about 14%) employed qualitative methods (e.g., interviews or case studies). A smaller subset
of 6 studies (about 9%) utilized mixed methods designs, combining quantitative and qualitative data.
Finally, 11 studies (about 17%) were conceptual or theoretical (non-empirical), including opinion
pieces, theoretical essays, and one systematic conceptual review that relied on bibliometric, content,
and integrative literature analysis. This distribution shows a strong empirical emphasis, with
quantitative research dominating, while a notable minority of works provide qualitative insights or
conceptual perspectives.</p>
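        <p>The method shares reported above follow directly from the study counts, as the short calculation below illustrates (our sketch, reproducing the rounding used in the text).</p>
        <preformat>
# Method distribution of the 66 included studies (Section 3.1).
counts = {"quantitative": 40, "qualitative": 9, "mixed methods": 6, "conceptual": 11}
total = sum(counts.values())                                   # 66
shares = {method: round(100 * n / total) for method, n in counts.items()}
# shares == {'quantitative': 61, 'qualitative': 14, 'mixed methods': 9, 'conceptual': 17}
        </preformat>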
        <p>In terms of sectoral focus, cross-industry or generic workplace samples dominated, appearing in
31 studies (about 47%). Hospitality, tourism, and food-service environments were next, featured in
11 studies (about 17%), followed by healthcare settings such as hospitals, ICUs, and outpatient clinics
in another 11 studies (about 17%). Manufacturing and industrial-production contexts were addressed
in 4 studies (about 6%), while transport automation involving high-speed rail, metro systems, and
autonomous shipping appeared in 3 studies (about 5%). Finance-sector applications, including
robo-advisory tools and banking call centers, were covered in 2 studies (about 3%), and the same number
focused on education or academic workplaces. Two single-study cases on gig-platform logistics and
a non-financial customer-service call center accounted for the remaining 2 studies (about 3%).
Overall, the evidence base is weighted toward general and service-sector settings, leaving heavy
industry, large-scale logistics, and finance-specific workplaces comparatively under-examined.</p>
        <p>Regarding the AI system types covered in the 66 studies, customer-facing service AI (chatbots,
service robots, virtual assistants, kiosks) was the most frequent focus, appearing in 13 studies (about
20%). Algorithmic management and surveillance platforms, including HR analytics, scheduling
engines, and behavior-tracking tools, were examined in 11 studies (about 17%). The same number of
studies, 11 (about 17%), centered on clinical AI and decision-support technologies embedded in
electronic health records or diagnostic workflows. Generative AI systems such as ChatGPT-style
language models formed the main topic in 5 studies (about 8%). Industrial automation and
collaborative robotics were addressed in 4 studies (about 6%). Transport-specific AI covering
autonomous train and ship control systems appeared in 3 studies (about 5%). Finance-oriented AI,
including robo-advisers and trading or fraud-detection engines, featured in a single study (about 2%).
Finally, 18 studies (about 27%) discussed AI in general or under the Smart Technologies, AI, Robotics,
and Algorithms (STARA) umbrella without highlighting any single system type. This distribution
indicates a clear emphasis on service interfaces and workplace management tools while still
representing a range of clinical, industrial, transport, and financial applications.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Emerging AI-stressors</title>
        <p>Below we introduce the five emerging AI-stressors that surfaced inductively during coding and that
do not match the six established technostressors. Each represents a distinct job demand rooted in the
unique features of AI systems and appeared often enough across studies to justify its category. Table
1 lists the five emerging AI-stressors, provides concise definitions synthesized from the reviewed
literature, and cites the studies that support each stressor.</p>
        <sec id="sec-3-2-1">
          <title>Emerging AI-Stressors Supporting Studies</title>
          <p>
            1. Techno-Unpredictability [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ], [22], [23], [27], [35], [36], [37],
Situations where AI systems behave unpredictably or [38]
opaquely, producing outcomes that users cannot foresee
or explain.
2. Loss of Autonomy [22], [23], [25], [26], [29], [35], [36],
Situations where decision authority is ceded to AI [38], [39], [40], [41], [42], [43], [44],
algorithms or algorithmic management, reducing [45], [46], [47], [48], [49], [50], [51],
employees’ ability to influence or control their work. [52]
3. Ethical and Moral Conflict [23], [26], [28], [29], [35], [36], [37],
Situations where AI introduces ethical and moral [38], [40], [43], [44], [45], [46], [53],
conflicts that clash with personal or professional norms. [54], [55], [56], [57], [58], [59]
          </p>
        </sec>
        <sec id="sec-3-2-2">
          <title>4. Social Erosion</title>
          <p>Situations where AI alters or breaks down interpersonal [23], [26], [28], [29], [35], [36], [37],
dynamics by thinning communication, eroding trust, [38], [40], [43], [44], [45], [46], [53],
intensifying competition, or creating emotional [54], [55], [56], [57], [58], [59]
disconnection between workers, supervisors, or the AI
itself.
5. Career Disruption [50], [59], [60], [61]
Situations where employees anticipate that Smart
Technologies, AI, Robotics, and Algorithms (STARA)
will significantly alter or threaten their career trajectory
or job security.</p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.2.1. Techno-Unpredictability</title>
        <p>
          The theme of techno-unpredictability emerges in 8 studies (about 12%) as a novel AI-stressor that
arises when an AI system shifts course in real-time and workers cannot foresee or influence its next
action. Issa et al. [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] empirically validate this stressor, defining it as “the unpredictable behavior of
AI systems that creates stress and anxiety for users.” In a healthcare sample, they link the construct
to algorithmic opacity and decision ambiguity, showing that techno-unpredictability raises
techno-distress. Röttgen et al. [35] extend the argument to algorithmic management, explaining that
rideshare drivers receive only the next pickup instruction while the platform silently rewrites subsequent
tasks, a design that “makes it impossible for the worker to oversee the complete details of a given
task” and therefore reduces predictability and transparency.
        </p>
        <p>Further evidence confirms the effect across industries. Cebulla et al. [36] interview AI specialists
and safety inspectors and document “shock events” and seasonal prediction failures that interrupt
production lines and create service discontinuities. In manufacturing, Karbouj et al. [27] show that
humans can predict fewer than half of a cobot’s adaptive moves, a mismatch that elevates
psychological stress and erodes trust. Tam et al. [23] note that real-world handovers between ship
crews and autonomous vessels are “likely to be unpredictable,” leaving operators unsure when
autonomy will cede control. Chen et al. [22] find a parallel risk on high-speed rail, where AI-driven
intelligentization injects random weather, passenger, and emergency scenarios that raise drivers’
mental workload and operational risk. Sinha et al. [37] report that self-learning shop-floor robots
create an unpredictable environment that heightens anxiety, especially among employees without
prior exposure to autonomy.</p>
        <p>Finally, early-stage adoption studies highlight a managerial blind spot. Cebulla et al. [38] observe
that many AI risks only surface post-rollout, noting that “the impact on users and developers of AI
may be difficult, if at all possible, to foresee.” Together, these studies show that when AI systems
update, adapt, or fail in real time, workers experience a distinctive form of stress characterized by
heightened vigilance, trust erosion, and the continual need to recalibrate action, confirming
techno-unpredictability as a robust, cross-domain AI-stressor.</p>
      </sec>
      <sec id="sec-3-3-1">
        <title>3.2.2. Loss of Autonomy</title>
        <p>In our review, 22 studies (about 33%) identify a shared mechanism by which AI erodes workers’ sense
of agency: once decision-making is ceded to opaque algorithms, employees feel they can no longer
shape, question, or even fully understand the forces that govern their jobs. This phenomenon, often
described as algorithmic control, shifts authority from human managers to AI systems that determine
task allocation, performance metrics, and disciplinary measures. Early commentary frames this shift
as a new “digital Taylorism,” noting that algorithmic dashboards now set targets, schedules, and even
discipline in ways employees “cannot negotiate or decline” [35,42]. Hotel staff torn between a
manager’s instructions and contradictory AI prompts report role conflict and anxiety [51]. Metro
and high-speed-rail drivers, relegated to passive monitoring while autonomous control handles
everyday operations, describe skill fade and the pressure of stepping in only during emergencies
[22,41]. Clinicians worry that machine recommendations will override professional judgment or
expose them to liability if they deviate [52], while laboratory staff fear that over-reliance on
automated analyzers will dull critical-thinking skills [25].</p>
        <p>Survey studies reinforce the pattern: STARA awareness or bossware oversight consistently
predicts lower perceived job autonomy, which in turn drives reduced engagement and higher
burnout [48,50]. Experimental vignette studies add nuance. When a decision-support system is
granted more autonomy, participants’ sense of control drops non-linearly while anxiety, frustration,
and technostress spike [39]. Large U.S. survey panels likewise find that occupations with higher
automation risk report significantly less autonomy together with higher stress and lower job
satisfaction [49].</p>
        <p>The mechanisms are varied: algorithmic routing that locks gig workers to prescribed paths,
call-center scripts that forbid deviation, constant screen-capture scoring of keystrokes or gaze, and even
generative AI templates that quietly shift creative discretion from humans to models [29,36,40,45].
Yet the psychological signature is the same: powerlessness, declining self-efficacy, and erosion of
professional identity. This makes the loss of autonomy via algorithmic control another consistently
documented pathway through which contemporary AI systems generate workplace stress.</p>
      </sec>
      <sec id="sec-3-3-2">
        <title>3.2.3. Ethical and Moral Conflict</title>
        <p>The theme of ethical and moral conflict emerges across 20 studies (about 30%) in our review as a
novel AI-stressor. It describes the psychological strain employees experience when algorithmic
decisions, data practices, or AI-enabled management routines clash with personal or professional
norms of fairness, privacy, autonomy, or duty of care. Evidence of this stressor cuts across sectors.
In service operations, Malik et al. [53] report frontline employees who fear that delegating decisions
to algorithms will reproduce existing social bias, a concern voiced explicitly in the statement that
organizational AI must not “replicate some of the bias that we have already in our society.” Cebulla
et al. [36] report that several interviewees felt genuine distress when workplace analytics tried to
infer highly sensitive attributes, such as a worker’s pregnancy status, from routine data. They judged
this practice a breach of privacy and dignity, one that extends managerial control and undermines
the right to a safe and fair workplace. Munn [40] analyzes “bossware” and explains how routine
compliance monitoring expands into pervasive surveillance, a shift employees describe as
harassment and a source of mistrust.</p>
        <p>In the healthcare sector, Wang et al. [62] find that practitioners already anticipate “potential
ethical breaches” when diagnostic AI is inaccurate, while Estrada et al. [58] quantify top concerns
among anesthesiologists, including algorithmic bias, incorrect recommendations, and patient harm.
Survey work by Irgang et al. [26] shows that clinicians struggle when regulatory frameworks compel
them to use AI even as professional judgment diverges from machine advice. Whitney et al. [52] draw
on participatory focus groups with maternity clinicians to show that many already experience
anticipatory moral distress: they expect a forthcoming machine-learning decision-support tool to
issue opaque risk scores that may not reflect social or contextual patient factors, and they fear that
either following or overriding those algorithmic recommendations could later be used against them
in malpractice reviews, ultimately undermining their professional duty to deliver individualized,
relationship-centered care.</p>
        <p>Likewise, Zhao et al. [54] capture hospitality workers’ fears of wage compression and skill
devaluation as generative AI lowers entry barriers, while Gao and Zamanpour [63] show that
financial engineers demand bias mitigation, transparency, and human oversight to maintain trust.
Wach et al. [45] highlight moral dilemmas that arise when users rely on ChatGPT despite the
possibility of fabricated answers, and Giray [28] links publish-or-perish pressure to unethical AI
shortcuts in academia. In robotics settings, Sinha et al. [37] record privacy and safety worries, and
Garcha et al. [55] demonstrate that sexist robot behaviors trigger measurable stress and disaffiliation
among female candidates. Cebulla et al. [38] point to a voice-and-autonomy dilemma, noting that
workers often hesitate to challenge erroneous machine outputs for fear of repercussions. Similarly,
conceptual syntheses by Arslan et al. [46] and Gupta et al. [64] reinforce that autonomous systems
introduce unresolved questions of accountability, bias, and legal liability. Collectively, these studies
position ethical and moral conflict as a robust, cross-domain AI-stressor precipitated by AI
deployment.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.2.4. Social Erosion</title>
        <p>Across 20 studies (about 30%) in our review, AI repeatedly emerges as a silent “third party” that
disrupts the social fabric of work. Whether it is a scheduling algorithm, a service robot, or a
generative AI co-writer, AI systems alter how and how often people interact, with three broad relational
consequences. First, AI-mediated work thins everyday communication. Qualitative evidence from
Industry 4.0 engineers [53], call-center agents, and maritime crews [23,36] describes
“context-stripped” exchanges in which face-to-face problem-solving is replaced by screen prompts, scripted
dialogues, or remote video links, leaving staff feeling isolated and unheard. Survey research [26,51]
backs this up: frontline hotel workers who must juggle instructions from managers and chatbots
report lower social support and higher strain, while healthcare staff navigating AI tools score
significantly lower on peer-trust scales.</p>
        <p>Second, algorithmic oversight breeds mistrust. “Bossware” systems that log keystrokes or camera
time recast colleagues as potential risks to be managed, eroding solidarity and normalizing suspicion
[40]. Similar patterns surface in fast-food restaurants and Turkish hotels, where higher STARA
awareness predicts cynicism toward both co-workers and the employer [56,57]. Physician surveys
with a large number of participants extend the point: almost one-fifth of anesthesiologists cite a “lack
of trust among colleagues” as a barrier to adopting AI decision aids, signaling a relational cost even
in high-skill teams [58].</p>
        <p>Third, AI introduces new fault lines, including competition, status threats, and biased behavior,
all of which place additional strain on the workplace. Anthropomorphic service robots can evoke
wariness and reduce knowledge sharing, as employees fear being outperformed [45,59].
Experimental work shows that a robot’s sexist remarks trigger measurable stress and disaffiliation,
illustrating how algorithmic bias can damage social rapport as surely as a prejudiced co-worker [55].
Mixed-methods studies of robotics deployments likewise document anxiety over skill
marginalization and perceived injustice, emotions that ripple through team relations [37]. Together,
these studies illustrate how AI systems, by thinning everyday interactions, normalizing surveillance,
and introducing new sources of social friction, contribute to a distinctive form of AI-stressor marked
by the erosion of communication, trust, and collegial bonds.</p>
      </sec>
      <sec id="sec-3-5">
        <title>3.2.5. Career Disruption</title>
        <p>Four studies (about 6%) [50,59–61] that measured employees’ STARA awareness provide
empirical evidence that AI-driven technologies disrupt long-term career trajectories, confirming
career disruption as an emerging AI-stressor. Career disruption and techno-insecurity are both forms
of technology-driven employment anxiety, but they emphasize different aspects of work life.
Techno-insecurity captures the fear that one’s current position could be eliminated or downgraded when
superior systems or more tech-savvy colleagues take over critical tasks. Career disruption, instead,
focuses on how Smart Technology, Artificial Intelligence, Robotics, and Algorithms (often grouped
as STARA) can alter the trajectory of an employee’s whole career: eroding advancement prospects,
undermining the value of accumulated expertise, and limiting long-term job autonomy. Quantitative
evidence confirms career disruption (operationalized in prior studies as “STARA awareness”) as a
distinct workplace stressor linked to AI adoption. Zhao et al. [61] found that intensified feelings of
career disruption heighten perceptions of psychological contract breach, which subsequently drive
organizational deviance.</p>
        <p>Similarly, Hur and Shin [50] demonstrated that higher awareness lowers job autonomy and
consequently suppresses proactive service performance. Yang et al. [59] found that anthropomorphic
service robots boost STARA awareness, which then diminishes service behavior through reduced
warmth perceptions. Finally, Yang and Jiang [60] reported that greater awareness of
technology-driven career uncertainty prompts employees to engage in job-crafting efforts aimed at protecting
their career prospects. Taken together, these studies verify career disruption as an emerging
AI-stressor that leads to adverse attitudinal and behavioral outcomes.</p>
      </sec>
      <sec id="sec-3-6">
        <title>3.3. AI-Amplified Established Technostressors</title>
        <p>The six categories below are techno-overload, techno-invasion, techno-complexity,
techno-insecurity, techno-uncertainty, and techno-unreliability. These are the established technostressors
validated in prior IS research on traditional IT [10–12]. Our review confirms that they continue to
appear in association with AI systems, yet the underlying mechanisms differ. Table 2 lists the six
amplified established technostressors, provides concise definitions synthesized from the reviewed
literature, and cites the studies that support each stressor.</p>
        <sec id="sec-3-6-1">
          <title>Established Technotressors Supporting Studies</title>
          <p>
            6. Techno-Overload [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ], [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ], [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ], [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ], [22], [23], [24], [26], [28],
Situations where AI systems force employees to [29], [35], [36], [38], [40], [42], [43], [44],
handle more tasks, at a faster speed, and under [45], [46], [47], [48], [49], [53], [54], [56],
tighter deadlines than their capacity allows. [60], [63], [64], [65], [66], [67], [68], [69],
[70], [71], [72], [73], [74], [75]
7. Techno-Invasion [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ], [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ], [23], [26], [29], [36], [37], [38],
Situations where AI tools continuously monitor or [40], [42], [45], [47], [48], [49], [53], [57],
connect employees, eroding privacy and blurring [63], [64], [68], [74], [76]
the boundary between work and personal life.
10. Techno-Uncertainty [
            <xref ref-type="bibr" rid="ref2">2</xref>
            ], [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ], [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ], [23], [24], [25], [26], [27],
Situations where opaque, self-updating AI systems [29], [36], [37], [38], [39], [40], [41], [42],
repeatedly shift tools, rules, and job boundaries, [44], [45], [46], [47], [48], [51], [52], [53],
leaving employees unable to anticipate how the [54], [58], [64], [56], [57], [60], [63], [65],
system will act, what new skills will be required, or [66], [67], [68], [69], [70], [71], [72], [73],
whether current tasks and roles will endure. [75], [76], [77], [78], [79], [80], [82], [83],
[85]
11. Techno-Unreliability [
            <xref ref-type="bibr" rid="ref3">3</xref>
            ], [23], [25], [28], [29], [36], [38], [42],
Situations where AI systems are unreliable or [43], [45], [52], [58], [66], [74]
error-prone.
          </p>
        </sec>
      </sec>
      <sec id="sec-3-7">
        <title>3.3.1. Techno-Overload</title>
        <p>
          Tarafdar et al. [10] and Ragu-Nathan et al. [11] described techno-overload as a situation where ICTs
force users to work faster and longer. Within the AI-stress literature, this stressor appears when AI
systems force employees to handle more tasks, at a faster speed, and under tighter deadlines than
their capacity allows. Across the 39 supporting studies (about 59%), the evidence points to a consistent finding:
rather than easing workloads, AI systems tend to amplify them. For instance, electronic surveys and
time-lagged panel data show that algorithmic scheduling and performance dashboards compel
workers to complete tasks faster and against tighter deadlines [
          <xref ref-type="bibr" rid="ref4 ref5">4,5</xref>
          ], while qualitative data from
healthcare, finance, and retail reveal that “always-on” expectations spill labor into unpaid hours and
rest periods [53,63]. Rather than relieving drudgery, AI broadens the task set, such as extra data fields
to verify and more cases routed for triage [29,69,71], producing what Irgang et al. [26] call a paradox
of performance: higher nominal efficiency, yet an expanding backlog of work.
        </p>
        <p>The cognitive burden is equally stark. Black-box decision-support systems inundate users with
probabilistic outputs that must be interpreted under time pressure, stretching working memory and
decision bandwidth [24,43]. In high-risk contexts the strain escalates abruptly: when intelligent rail
automation fails, drivers must reassume manual control within seconds, triggering an instantaneous
spike in mental workload that exceeds their available cognitive resources [22]. Parallel evidence from
algorithmic management shows that granular keystroke surveillance erodes autonomy and forces
workers into a perpetual “performance treadmill,” a dynamic conceptualized as “overtaxing
regulation” [35] and empirically linked to job-stress scores in hospitality and logistics settings [65].</p>
        <p>
Compounding these pressures is a persistent learning anxiety driven by the rapid pace
of AI evolution. Managers, clinicians, and researchers report a continuous skills gap, where prior
knowledge is frequently rendered obsolete by the next model update [28,66]. Structural equation
models using STARA awareness confirm that this perceived obsolescence fuels turnover intentions
and psychological strain [56,60]. Quantitative analyses further reveal a tipping-point dynamic:
techno-overload’s impact on distress intensifies rapidly after a certain threshold [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], suggesting that
modest automation may be tolerable, but scaling AI without guardrails yields disproportionate harm.
        </p>
        <p>
          Finally, a cross-cutting thread of role insecurity permeates the reviewed studies. Whether framed
as “digital Taylorism” [42] or “fear of AI” in tight labor markets [67], workers expend additional
effort simply to remain employable, thereby self-intensifying their overload. Longitudinal evidence
indicates that any short-term motivational lift from challenge-appraised overload [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] quickly flips to
hindrance, manifesting as burnout and reduced engagement [44].
        </p>
      </sec>
      <sec id="sec-3-8">
        <title>3.3.2. Techno-Invasion</title>
        <p>
          Tarafdar et al. [10] and Ragu-Nathan et al. [11] described techno-invasion as the pressure created by
ICTs that allow users to be reached at any time, fostering a sense of constant connectivity and
blurring work-life boundaries. In our review, 21 studies (about 32%) show that AI systems amplify this
stressor through three interrelated mechanisms: Firstly, constant connectivity blurs temporal and
spatial boundaries. Employees report spending less family time, staying online during holidays, and
feeling obliged to master new AI updates after working hours [
          <xref ref-type="bibr" rid="ref4 ref5">4,5,53,68,76</xref>
          ]. Hotel, financial services,
and maritime studies all find that cloud-based AI or remote autonomy creates an “always-on” culture
in which staff are permanently reachable [23,63]. Quantitative models show that this boundary
erosion predicts work-family conflict, lower engagement, and higher techno-distress [
          <xref ref-type="bibr" rid="ref5">5,57</xref>
          ].
        </p>
        <p>Secondly, AI-enabled surveillance presses work deep into personal domains. Commentary and
survey work detail video analytics, geolocation tracking, and algorithmic productivity scores that
operate continuously, often without disclosure [42,49]. “Bossware” extends this monitoring to home
offices, normalizing data capture workers cannot escape [40]. Qualitative research lists wearable
sensors, IoT devices, and predictive models that follow employees’ health data or pregnancy status,
a “boundary creep” that compromises privacy [36]. Similar concerns arise with virtual personal
assistants that make spoken commands audible to customers or colleagues, forcing workers to weigh
convenience against discretion [74]. Mixed-methods evidence on shop-floor robotics links privacy
fears to technophobia and resistance [37].</p>
        <p>Thirdly, professional roles and autonomy are pulled into algorithmic orbit. Physicians feel obliged
to answer patient queries tied to AI-generated health data outside clinic hours [69], while healthcare
workers describe fluctuating power and control as they juggle human and algorithmic decisions,
indicating that AI disrupts personal control boundaries and invades their professional autonomy
[26]. Generative AI users report checking ChatGPT after work to keep their skills current [45], and
educators in Ed-tech start-ups say the same tools redefine expected communication standards,
pressuring them to respond around the clock [29]. Review studies argue that the spread of predictive
analytics makes “continuous online presence” the new default condition [64], and critical analyses
of surveillance capitalism show how emotional and biometric data are harvested without consent
[47]. Across quantitative, qualitative, and conceptual work, these findings converge: AI systems
extend tasks into evenings and weekends, enable pervasive monitoring, and recast professional
autonomy, together producing a robust pattern of techno-invasion-related stress [48].</p>
      </sec>
      <sec id="sec-3-9">
        <title>3.3.3. Techno-Complexity</title>
        <p>
          According to Tarafdar et al. [10] and Ragu-Nathan et al. [11], techno-complexity arises when users
feel their skills are inadequate and must invest effort in learning and understanding new ICTs. In our
review, 34 studies (about 52%) indicate that AI systems amplify this established stressor when the
opaque, self-modifying logic, probabilistic outputs, or frequent upgrades of AI surpass human
comprehension and available training. Self-learning algorithms evolve in real-time, undermining even
basic operational understanding and hazard management [42], while clinicians highlight similar
opacity as a leading stressor in daily practice [58]. Multi-wave surveys operationalize
techno-complexity through statements like “I do not know enough about this technology to handle my job
satisfactorily,” demonstrating that unfamiliar AI technologies undermine self-efficacy [68] and
identifying “AI complexity” as sufficient to significantly increase workload [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
        <p>
          Interviews describe employees working overtime merely to decode cryptic applications [53], and
managers who, overwhelmed by inadequate data and unclear deployment guidelines, explicitly label
the technology "stressful because of its complex nature" [66]. Workforce studies document resistance
and dissatisfaction when complex systems are implemented without clear evaluations of their
benefits[36], categorizing perceived technological complexity as a hindrance stressor that obstructs
adoption [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. Controlled experiments reveal that decision-support AI systems managing multiple
task types lead to heightened mental effort and technostress [39]. Additionally, frontline surveys
connect the difficulty of mastering fast-food ordering AIs to diminished motivation and mental
health issues [57]. In healthcare, AI-enhanced Electronic Health Record (EHR) functions introduce
jargon-filled and non-standard interfaces that intimidate physicians [69]; pharmacogenomic alerts,
dense with genetic information, overwhelm clinicians [71]; blood-use calculators impose substantial
cognitive burdens on users lacking AI literacy [24]; and hospital staff experience anxiety when
required to navigate AI-driven complexity, ambiguity, and associated risks [26].
        </p>
        <p>Finance and engineering professionals emphasize that maintaining AI accuracy and reliability
necessitates continuous learning, escalating cognitive and emotional demands [63]. Quantitative
analyses further confirm that complex recommendation AIs leave employees feeling inadequate and
disengaged [70]. Laboratory technologists express frustration due to the lack of structured AI
training curricula [25]; maritime crews grapple with situational awareness challenges as autonomy
levels fluctuate [23]; and micro-business traders actively avoid virtual personal assistants perceived
as excessively complicated [74]. Manufacturing and hospitality sector surveys indicate that STARA
technologies blur role definitions and trigger turnover intentions when employees struggle to master
complex systems [56,60]. Conceptual analyses caution that generative AI interfaces, such as
ChatGPT, exacerbate complexity [29,45], relentless technological change rapidly renders skills
obsolete [46], AI-mediated video meetings exhaust cognitive resources [47], and AI deployments
generate unexpected psychosocial risks due to unpredictable impacts [38].</p>
        <p>
          Mixed-methods and longitudinal studies further link perceived AI complexity to burnout [48,72]
and stress-induced job crafting [77]. While some evidence suggests that complexity can occasionally
be reframed as a challenge that fosters eustress in supportive environments [
          <xref ref-type="bibr" rid="ref5">5,43,44</xref>
          ], the broader
pattern underscores its role as a persistent cognitive and emotional burden in AI-integrated
workplaces.
        </p>
      </sec>
      <sec id="sec-3-10">
        <title>3.3.4. Techno-Insecurity</title>
        <p>
          Techno-insecurity, as outlined by Tarafdar et al. [10] and Ragu-Nathan et al. [11], refers to user fears
about job loss due to technological replacement or others having superior tech skills. In our review,
46 studies (about 70%) report that AI intensifies this stressor by heightening worries about role
displacement, skill obsolescence, and long-term career stability. Multiple studies [56,60,85] confirmed
that AI awareness, defined as the perception of potential substitution by AI, triggers psychological
strain, lowers self-efficacy, and drives defensive or avoidant behaviors. A commentary study [42]
documented expectations of work intensification and displacement in robot-assisted settings. Survey
research operationalized the construct with items such as “I feel a constant threat to my job security
due to new technologies,” showing that AI awareness heightens insecurity, lowers self-efficacy, and
increases stress [
          <xref ref-type="bibr" rid="ref5">5,68</xref>
          ].
        </p>
        <p>Experience-sampling and time-lagged designs reveal that frontline hotel employees who
anticipate AI substitution report emotional strain, work-family conflict, and counter-productive
behavior [51,76], while laboratory experiments demonstrate that exposure to high-performing AI
assistants lowers self-esteem and intensifies job-loss concerns [78]. Qualitative interviews in
Industry 4.0 contexts describe persistent fears of redundancy and role ambiguity [53,66], and
thematic analyses in finance identify “pressure to keep up or be displaced” as a salient stressor [63].
Sector-specific studies report analogous patterns: clinicians anticipate reduced demand and income
as machine-learning systems enter practice [58,73], whereas laboratory personnel, maritime crews,
and hospitality staff express concern about the automation of core tasks [23,25,84].</p>
        <p>Large-scale surveys link STARA awareness to lower engagement, higher turnover intention, and
diminished well-being [56,60], and mixed-methods research associates insecurity with burnout,
depression, and disengagement in healthcare, marketing, and call-center settings [62,82,86].
Conceptual analyses attribute the phenomenon to rapid skill obsolescence, AI-enabled surveillance,
and pervasive lay-off narratives [40,45,46]. Together, these studies demonstrate that
techno-insecurity continues to surface in AI-enabled workplaces, reflected in perceived job threats, career
instability, automation anxiety, and fears of displacement.</p>
      </sec>
      <sec id="sec-3-11">
        <title>3.3.5. Techno-Uncertainty</title>
        <p>
          Tarafdar et al. [10] and Ragu-Nathan et al. [11] defined techno-uncertainty as the situation that
emerges from ongoing changes and updates in ICTs that require constant learning and adjustment.
In our review, 49 studies (about 74%) indicate that this stressor is amplified when opaque,
self-updating AI systems repeatedly shift tools, rules, and job boundaries, leaving employees unable
to anticipate how the system will act, what new skills will be required, or whether current tasks and
roles will endure. Survey instruments document “frequent changes in software and hardware” [29]
and the sense that “there are always new developments” in AI tools [68]; rolling releases of
generative systems [45] and short life cycles of clinical add-ons [69] repeatedly reset performance baselines,
while insufficient training converts these updates into burnout triggers [25,48,72], and quantitative
work links higher perceived volatility to greater technostress, exhaustion, and weaker adoption
intent [
          <xref ref-type="bibr" rid="ref3">3,68</xref>
          ].
        </p>
        <p>Uncertainty also intensifies when an autonomous system’s logic remains hidden: self-learning
algorithms with shifting rules impede risk control and heighten anxiety [42]; clinicians report stress
when alerts lack transparent evidence [71] or when unfamiliar genetic markers suddenly shape
recommendations [58]; experimental vignettes show that highly autonomous decision-support tools
elicit “feelings of uncertainty or ambiguity,” especially when task stages are invisible [39]; and
maritime crews fear that autonomous ships may not cope with novel situations, forcing risky
overrides [23], while qualitative analysts document “boundary creep,” in which AI gradually expands
beyond its remit [36].</p>
        <p>Parallel literature ties techno-uncertainty to substitution anxiety: daily-diary data reveal
emotional spikes whenever hotel employees believe AI could replace them [76]; time-lagged surveys
associate AI awareness with depression, anxiety, or turnover intentions among hospitality workers
[51,56,65], fast-food staff [57], and service employees [60]; longitudinal analysis traces similar
concerns across U.S. occupations as generative AI matures [79]; comparable narratives emerge from
metro drivers confronting automatic train operation [41], restaurant staff aware of STARA
technologies [80], and tour guides who fear “humans will struggle to compete” with AI [54].</p>
        <p>
          Organizational factors compound the stress: rapid roll-outs create paradoxical tensions in
healthcare [26] and foster knowledge-hiding in other sectors [82]; Industry-4.0 professionals report
multitasking, information overload, and chronic uncertainty [53]; AI-based surveillance systems
leave employees unsure about when, how, and to what extent they are being
surveilled, thereby destabilizing their sense of control and predictability at work [40]; clinicians
worry about post-hoc liability when deciding whether to follow opaque recommendations [52]; and
safety experts note that many AI risks surface only late in deployment, when corrective action is
costly [38]. Where measured quantitatively, techno-uncertainty correlates with technostress,
emotional exhaustion, burnout, reduced productivity, lower self-efficacy, and diminished AI
adoption intentions [48,68,70]. One study failed to find a statistical link yet still recorded the
perception of perpetual upgrades, labeling the phenomenon as “techno-unpredictability” [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ].
        </p>
      </sec>
      <sec id="sec-3-12">
        <title>3.3.6. Techno-Unreliability</title>
        <p>
          According to Weinert et al. [12], techno-unreliability describes a situation characterized by system
errors, unpredictable freezes, crashes, and intermittent availability. In the AI context, 14 studies
(about 21%) show that unreliability remains a serious concern. Qualitative interviews show that
faulty chatbots and data-poor models generate invalid marketing leads and extra rework, making
unreliability “a stress factor for marketing people” [66]. Commentary on cyber-physical systems
traces catastrophic consequences to a single sensor’s bad data feeding automated control loops, as in
the Maneuvering Characteristics Augmentation System (MCAS) software incident that overwhelmed
pilots [42]. Workplace safety research lists prediction failures, technical breakdowns, and security
breaches that jeopardize essential services and expose the current limits of AI [36]. Survey studies
treat crashes, malfunctions, and low availability as hindrance stressors that lower adoption intent
and elevate psychological load [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] and empirically link poor “perceived availability” to greater user
stress [43].
        </p>
        <p>Conceptual reviews of generative AI underline hallucinations, nonsensical or offensive content,
and low factual precision as core reliability deficits [45] and document scholarly concern over
fabricated citations that erode research integrity [28]. Clinical evidence echoes the theme: laboratory
professionals doubt algorithm validity in a qualitative interview [25], anesthesiologists flag
“incorrect decisions” and “system failures” that could harm patients [58], and clinicians foresee
low-sensitivity alerts and data-driven misclassifications leading to inappropriate care [52]. Maritime
analysts warn that autonomous ships may mis-react to novel situations and amplify cyber risks such
as misinformation or loss of control [23]. Small business owners report virtual assistants that
misinterpret commands, wasting customer time and jeopardizing sales [74]. Finally, a generative AI
case study notes employee anxiety when plausible-sounding answers lack evidence and threaten
professional credibility [29]. Collectively, these studies demonstrate that techno-unreliability
continues to surface in AI-enabled workplaces, driven by technical faults, hallucinations, data-quality
gaps, and availability lapses, all of which erode trust, increase corrective workload, and elevate
psychosocial risk.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>This review uncovers a dual pattern in how AI reshapes workplace stress. Beyond amplifying the
established technostressors, the literature on AI and stress introduces emerging
AI-stressors that do not map neatly onto the traditional categories of technostress creators and that arise from
AI’s unique characteristics. Figure 2 shows that the stressors associated with AI systems cluster into
two categories: five emerging AI-stressors (techno-unpredictability, loss of autonomy, ethical and
moral conflict, social erosion, and career disruption) that are unique to AI systems and six established
technostressors (techno-overload, techno-invasion, techno-complexity, techno-insecurity,
techno-uncertainty, and techno-unreliability) that are amplified by AI systems.</p>
      <p>
        The first emerging AI-stressor, techno-unpredictability, refers to situations in which AI systems
behave in unexpected or opaque ways, generating outcomes that users are unable to anticipate,
interpret, or explain [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. For example, Röttgen et al. [35] discuss algorithmic management in
platforms and note that AI frequently shuffles task assignments without warning. They explain that
the “constant integration of huge amounts of data by [algorithmic management] that is used to adapt
action plans and schedules in real-time makes it impossible for the worker to oversee the complete
details of a given task before starting”. In practice, a driver in a rideshare system “is only presented
with the very next potential passenger pickup address,” so work unfolds moment-to-moment rather
than as a predictable plan. This creates a unique stress: workers cannot anticipate future tasks or
plan effectively. In short, unlike traditional IT (which follows fixed protocols), AI’s real-time
adaptation and hidden logic make work highly unpredictable, inducing anxiety and frustration.
      </p>
      <p>The second AI-stressor is diminished autonomy due to algorithmic direction. Röttgen et al. [35]
highlight that advanced algorithmic management (AM) often “predefines routines so tightly that
humans can no longer self-direct their work”. They note that with AI management, “neither the task
assignment nor its scheduling is to be easily negotiated or declined,” and “more complete AM should
be related to lower job autonomy”. In other words, workers become cogs following AI instructions
(e.g. orders from a manager app or scheduling algorithm) rather than agents making choices. This
goes beyond established technostress mechanisms: here the system dictates not only what the tasks are,
but often when, how, and in what order to do them. One study [40] describes this phenomenon as
“AI-enabled bossware,” where algorithmic supervision and tasking leave employees feeling
micromanaged by the machine. The net effect is a new stress, a feeling of being powerless or
controlled by the AI system, that is not captured by ordinary techno-complexity or insecurity.</p>
      <p>AI’s unique capabilities also spawn ethical and moral conflict at work. Several studies document
that employees may experience stress from the ethical implications of AI decisions. For instance,
Cebulla et al. [36] provide a clear example: in interviews with data scientists and regulators, they
find that AI can generate “resolutions affecting ethical, moral, and social principles,” such as
predicting sensitive health conditions or pregnancy from personal data. These capabilities can
contravene privacy and equity norms: as the authors note, “predicting health conditions/pregnancy
contravening privacy” is a concrete worry. They further caution that organizations must consider
whether “AI-driven organizational innovation” may undermine workers’ rights to a healthy and safe
workplace. In practical terms, employees can feel moral dissonance (and stress) when using AI: for
example, knowing an AI tool might reinforce biases, invade privacy, or make decisions with
life-and-death impact. These ethical and moral conflicts, ranging from privacy violations to decisions that
conflict with professional judgment, have been confirmed in several studies as a distinct source of
stress in AI-integrated workplaces [26,36,54].</p>
      <p>Similarly, AI is also said to affect interpersonal dynamics at work. Malik et al. [53] find indirect
evidence: in their interviews, one respondent remarked that “AI intervention in [Industry 4.0]… has
changed the way of communication and it has brought invasion in personal life and digital
overdependence,” reducing “human-social interaction among the employees”. This suggests that as
AI mediates more tasks (e.g. chatbots handling customer queries, robots replacing assistants),
workers may feel socially isolated or worry about weakening team bonds. Other authors [36] discuss
how collaborative or supervisory AI (e.g. emotion-tracking bots or automated HR tools) can erode
trust and open communication. In sum, AI introduces social erosion such as reduced face-to-face
interaction, dehumanized communication, and interpersonal distrust, none of which are adequately
captured by established technostress dimensions.</p>
      <p>Finally, career disruption highlights a qualitatively distinct form of AI-induced stress that extends
beyond the scope of earlier technostress studies. Unlike general insecurity associated with digital
technologies, STARA awareness refers to anticipatory anxiety that smart technologies, artificial
intelligence, robotics, and algorithms may replace employees’ roles or undermine their long-term
career prospects [50,61]. Empirical studies confirm that such perceptions erode job autonomy and
reduce proactive behavior [50], foster psychological strain linked to fears of exclusion and
obsolescence [59,60], and reflect a growing awareness that even anthropomorphic machines can
challenge human uniqueness [59]. As AI systems increasingly take on complex, human-facing tasks,
these future-oriented stress responses demand more systematic attention.</p>
      <p>
        While these emerging AI-stressors reveal the emergence of fundamentally new sources of strain,
they do not replace the established technostressors. Instead, the literature shows that AI also
amplifies the existing stressors associated with traditional IT, intensifying the established
technostressors first identified by Tarafdar et al. [10] and Ragu-Nathan et al. [11]. Techno-overload,
for instance, rises when algorithmic dashboards accelerate task flow, pushing employees to work
faster and longer [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Techno-invasion intensifies when cloud-based AI services and their
round-the-clock notifications keep employees perpetually reachable, blurring work-home boundaries and
extending surveillance into private life [68]. Techno-complexity intensifies when self-learning
algorithms evolve during operation and their opaque, black-box outputs leave employees feeling
unqualified to interpret results, pushing them into continual up-skilling [42]. Techno-insecurity
rises when employees realize that AI could replace their roles, as radiology residents report
fewer future positions for human diagnosticians [73] and frontline hotel staff anticipate automated
service jobs [76]. Techno-uncertainty intensifies because rapid AI upgrades force continual relearning,
which lowers employees’ confidence in their ability to cope [68] and at the same time heightens
overall workplace stress [48]. Finally, techno-unreliability surfaces when generative AI hallucinates
or produces irrelevant text [45], chat-bots crash or mis-route leads because of low model stability
[66], and safety-critical sensors feed corrupted data into algorithmic control loops [42]; each
breakdown forces employees to re-validate results and accept responsibility for any downstream
mistakes.
      </p>
      <p>Overall, the reviewed studies show a twofold impact of AI on technostress. On one hand, AI
introduces emerging stressors reflecting its special properties: unpredictable behavior, tight
algorithmic control over work, serious ethical and moral conflict, and altered workplace
relationships. On the other hand, AI tends to amplify the established technostressors by increasing
overload, invasion, complexity, insecurity, uncertainty, and unreliability through escalating
workload, intrusion, risk, and fear.</p>
      <sec id="sec-4-1">
        <title>4.1. Implications for Theory</title>
        <p>Theoretically, the review’s findings challenge and extend existing studies of workplace stress and
technostress in fundamental ways. Prior technostress research [10–12] documents six established
technostressors linked to traditional IT (techno-overload, techno-invasion, techno-complexity,
techno-insecurity, techno-uncertainty, techno-unreliability), but AI’s introduction adds qualitatively
new stressors that these studies did not originally account for. From the reviewed studies, we found
that AI not only amplifies the established technostressors but also generates emerging “AI-stressors”
such as techno-unpredictability, loss of autonomy, ethical and moral conflict, social erosion, and career
disruption. For instance, techno-unpredictability is operationalized as “the phenomenon where the
unpredictable behavior of AI systems creates stress and anxiety for users”. This goes beyond
established techno-uncertainty by highlighting stress from algorithmic opacity and erratic AI
outcomes that employees cannot anticipate. Similarly, AI’s capacity to act autonomously can induce
a perceived loss of control or autonomy in workers. As Howard [42] explained, giving algorithms
power over work decisions without transparency erodes employees’ autonomy and can lead to “work
intensification, psychosocial stress, and a decline in worker well-being” – a dynamic not fully
accounted for in earlier studies on technostress.</p>
        <p>Moreover, the identification of ethical and moral conflict, i.e., distress arising when AI systems’
decisions or uses conflict with an employee’s moral values or fairness norms, pushes theory into new
territory at the intersection of technology and ethics. Prior technostress studies did not consider that
workers might experience stress from, say, an AI exhibiting bias or making ethically fraught
decisions. Yet studies [54] now document such scenarios: employees voice “fears of unjust labor
replacement, devaluation of human skills, and societal disruption” due to AI, which raises
fundamental ethical questions about fairness, dignity, and the future of work. This suggests that
workplace stress theories should be expanded to account for moral and value-based appraisals in
addition to traditional cognitive assessments of task demands.</p>
        <p>
          In terms of stress appraisal theory, many of these emerging AI-stressors are likely to be appraised
as hindrance demands (e.g., unpredictable failures, opaque algorithms, job insecurity) that threaten
well-being rather than as challenges. Indeed, researchers have already begun framing AI-related
issues like system breakdowns as hindrance stressors that provoke negative effects and lower
technology acceptance [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. The Job Demands-Resources (JD-R) model offers a useful lens here: the
emerging AI-stressors represent additional job demands that can drain employees’ mental resources
and require new support. For instance, career disruption, the fear that one’s job could be replaced by
Smart Technology, AI, Robotics, and Algorithms (STARA), is essentially an AI-age extension of
techno-insecurity, reflecting “a unique perception of job uncertainty and insecurity in the digital
era.” In JD-R terms, such a heightened insecurity demand would necessitate countervailing resources
(e.g., retraining opportunities, assurance of job security) to prevent strain.
        </p>
        <p>In summary, our review suggests that prevailing stress theories (technostress, JD-R, and cognitive
appraisal frameworks) must be revised and enriched to include emerging AI-stressors like
unpredictability, loss of autonomy, ethical and moral conflict, social erosion, and career disruption.
These factors challenge the completeness of existing models and call for new theoretical
development on how employees appraise and cope with AI as a source of stress in the workplace.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Implications for Practice</title>
        <p>Practically, from a managerial and HR perspective, the findings carry urgent lessons for how
organizations should implement AI while safeguarding employees’ well-being. A key takeaway is
that firms cannot treat AI adoption as a purely technical upgrade; it is a socio-technical change that
requires proactive stress management. Recognizing these emerging AI-stressors equips
organizations and managers to better anticipate and mitigate the negative side effects of AI adoption.
Steps such as improving AI transparency and reliability, involving employees in AI implementation
decisions, providing AI training and clear ethical guidelines, and fostering open communication
might help mitigate these stressors. By managing the emerging AI-stressors (not just the established
technostressors), employers might protect employee well-being and performance during AI-driven
transformations.</p>
        <p>Similarly, policymakers can also incorporate these insights into AI governance and labor
regulations. Standards for algorithmic transparency, fairness, and human oversight in workplace AI
could reduce employees’ uncertainty and ethical strain [62]. Likewise, limits on AI-based surveillance
can protect worker autonomy and privacy, addressing stress from loss of control [40]. Finally,
supporting workforce reskilling and transition programs may alleviate STARA-related anxieties.
Such measures might help ensure that AI adoption’s benefits are realized without compromising
employee well-being.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Limitations and Future Research</title>
<p>There are several limitations to consider in this review. First, our inclusion criteria were restricted to studies
that explicitly examined stress related to AI, which may introduce selection bias. These studies, by definition,
focused on the negative implications of AI use, and we excluded studies where AI was only discussed
in a positive or neutral light, without reference to stress. This means our review paints a deliberately
problem-focused picture and should not be read as indicating that AI always causes stress. Rather, it
identifies what the problems are when stress occurs. Relatedly, many included studies themselves
may suffer from a form of publication bias: researchers detecting issues might be more likely to
publish on AI-stress, whereas organizations that implemented AI with minimal stress might not
document those cases.</p>
        <p>Second, most empirical evidence is cross-sectional (surveys at one point in time), limiting causal
inference. We often assume AI factors cause stress, but it could also be that already-stressed
individuals perceive AI more negatively (a reverse causality or common method issue). Only a few
longitudinal studies [49,57,61,68,79,86] are yet available to confirm causality. Third, there is a geographical
bias: a significant number of studies were from East Asia (China, in particular, in studies of AI
awareness and stress) and Western countries. This bias is important to consider as cultural factors
can influence stress perceptions. For instance, surveillance might be even more stress-inducing in
cultures with high privacy expectations.</p>
        <p>Another limitation is that we did not formally weight studies by quality. We included conceptual
studies on equal footing with empirical ones in qualitative synthesis. While this provides breadth,
some claims (especially from conceptual studies) are not empirically validated. We attempted to
cross-verify concepts (e.g., ethical and moral conflict) with any available empirical hint, but the
reader should note which findings are strongly evidence-based (e.g., job insecurity – supported by
many surveys) versus more hypothetical (e.g., ethical and moral conflict, social erosion – fewer data
points). Additionally, our search strategy, confined to one database (Scopus) and specific keywords,
might have missed relevant works that discuss similar phenomena under different terms (e.g.,
“strain” or “burnout” instead of stress). We believe the key themes would likely be similar, but future
reviews could expand to other databases or grey literature (such as reports on AI and worker
well-being) for a more exhaustive capture.</p>
        <p>Finally, with the area evolving rapidly, new types of AI (like advanced generative models) are just
beginning to be studied, although some of those studies (like the study by Wach et al. [45] on
ChatGPT) already suggest potential stress issues (e.g., misinformation leading to stress, or new
training burdens). A key strength of our review is its recency, covering studies up to early 2025,
though this also means many included studies are still preliminary. As AI technology and its uses
change, the stressors may also shift (for instance, if regulation curtails the most invasive
surveillance, that stressor might diminish; conversely, if AI begins making managerial decisions
entirely, autonomy loss could worsen). Ongoing research will be needed to keep this knowledge up
to date.</p>
        <p>Building on this review, future studies could explore several avenues. One is to conduct
longitudinal research to observe how stress levels change from pre-AI implementation to
post-implementation, establishing causal links. This could also identify adaptation effects, such as whether
some stressors fade as employees get used to AI or whether they persist. Moreover, several of the
emerging AI-stressors identified in this review, including techno-unpredictability, loss of autonomy,
ethical and moral conflict, social erosion, and career disruption, remain conceptually underdeveloped
and lack validated measurement scales. This indicates a need for future research to formalize and
empirically test these constructs across diverse contexts. Another important direction is intervention
studies: testing what organizational practices or individual coping strategies can alleviate AI-induced
stress. For example, does training focused on improving AI literacy reduce complexity-related stress
and increase challenge appraisal? Can participatory design (involving employees in AI tool
development) mitigate autonomy loss stress? These questions have practical significance.</p>
        <p>Similarly, another important direction is to investigate moderators and boundary conditions that
influence AI-stress relationships. The studies we reviewed hinted at several factors that may buffer
or exacerbate stress responses, but these remain under-studied. For example, personal characteristics
such as technological self-efficacy, tolerance for ambiguity, age, and IT experience likely shape how
employees appraise AI’s demands. Kim and Lee [48] underscore self-efficacy as crucial for mental
well-being during AI adoption, suggesting that similar individual differences (like a growth mindset
or openness to change) could moderate stress outcomes. Organizational factors like leadership style
and support climate are also ripe for further exploration – we saw that coaching leadership can
mitigate stress [44], but what about organizational culture, change management practices, or the
presence of strong social support networks among coworkers? Future studies could employ
moderation and mediation analysis to map out these contingencies. For instance, researchers could
ask: under what conditions does an AI that increases work pace not lead to burnout? Perhaps in
organizations with high perceived organizational support or in teams with adaptive norms, the
negative impact is softened (e.g., a study suggests organizational support as a buffer [78]).</p>
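        <p>To make the proposed moderation analysis concrete, the following minimal sketch illustrates
how an interaction term in an ordinary least squares regression can test whether perceived
organizational support buffers the link between AI-driven work pace and burnout. The variable
names (ai_pace, org_support, burnout) and the simulated data are hypothetical and are not drawn
from any reviewed study; the sketch is a template for the kind of analysis suggested above, not a
reanalysis of the reviewed evidence.</p>
        <preformat>
# Minimal sketch of a moderation (interaction) analysis in Python.
# All variables and data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "ai_pace": rng.normal(0, 1, n),      # perceived AI-driven work acceleration
    "org_support": rng.normal(0, 1, n),  # perceived organizational support
})
# Simulated outcome with a built-in buffering effect, purely for illustration
df["burnout"] = (0.5 * df["ai_pace"]
                 - 0.3 * df["org_support"]
                 - 0.2 * df["ai_pace"] * df["org_support"]
                 + rng.normal(0, 1, n))

# The product term tests the moderation hypothesis: a negative
# ai_pace:org_support coefficient indicates that organizational support
# softens the pace-burnout relationship.
model = smf.ols("burnout ~ ai_pace * org_support", data=df).fit()
print(model.summary())
        </preformat>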
        <p>Likewise, cross-cultural comparisons would be valuable, as many studies in our review were
conducted in East Asia and Western Europe/North America, and cultural values (such as attitudes
toward privacy or uncertainty avoidance) could cause variations in stress perception. Comparative
research across different cultural or industry contexts can illuminate whether AI-induced
technostress is a universal phenomenon or one moderated by context.</p>
        <p>Additionally, expanding beyond the scope of our review, future work might incorporate related
outcomes such as well-being, job satisfaction, or mental health to see the broader impact of these
stressors. Several studies in our set linked AI stress to outcomes like decreased engagement or
increased turnover intentions [56] (e.g., STARA awareness leading to cynicism and lower job
satisfaction [61]). A meta-analytic approach in a few years might quantify the impact (if sufficient
homogeneous measures are collected).</p>
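        <p>As a sketch of what such a meta-analytic synthesis might involve, the fragment below implements
the standard DerSimonian-Laird random-effects estimator on made-up effect sizes (e.g., Fisher-z
transformed correlations between STARA awareness and turnover intention). The numbers are
illustrative placeholders, not results extracted from the reviewed studies.</p>
        <preformat>
# Minimal sketch of a random-effects meta-analysis (DerSimonian-Laird).
# Effect sizes and variances below are hypothetical placeholders.
import numpy as np

y = np.array([0.30, 0.22, 0.41, 0.18, 0.35])       # per-study effect estimates
v = np.array([0.010, 0.015, 0.008, 0.020, 0.012])  # per-study sampling variances

w = 1.0 / v                               # fixed-effect (inverse-variance) weights
y_fe = np.sum(w * y) / np.sum(w)          # fixed-effect pooled estimate
Q = np.sum(w * (y - y_fe) ** 2)           # Cochran's heterogeneity statistic
k = len(y)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)        # between-study variance (DL estimator)

w_re = 1.0 / (v + tau2)                   # random-effects weights
y_re = np.sum(w_re * y) / np.sum(w_re)    # pooled random-effects estimate
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {y_re:.3f}, 95% CI = "
      f"[{y_re - 1.96 * se_re:.3f}, {y_re + 1.96 * se_re:.3f}], tau^2 = {tau2:.4f}")
        </preformat>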
        <p>Finally, given the ethical dimension that emerged, interdisciplinary work bridging ethics, law,
and psychology could be fruitful. For instance, how do emerging AI governance frameworks (like
requiring explainability) alleviate or aggravate employee stress? Will clear AI accountability rules
reduce the moral distress and uncertainty currently felt? Such questions sit at the intersection of
policy and employee experience and would benefit from collaborative inquiry.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Declarations</title>
      <p>Competing Interests: The authors declare no competing interests or conflicts of interest relevant
to this work.</p>
      <p>Data Availability: The full list of the 66 included studies can be accessed here.</p>
        <p>Declaration on Generative AI: The author(s) have not employed any generative AI tools in the
development of this manuscript, aside from standard grammar-checking tools (e.g., Grammarly).</p>
        <p>[9] C. Brod, Managing technostress: optimizing the use of computer technology, Pers J 61 (1982)
753–757.
[10] M. Tarafdar, Q. Tu, B.S. Ragu-Nathan, T.S. Ragu-Nathan, The Impact of Technostress on Role
Stress and Productivity, Journal of Management Information Systems 24 (2007) 301–328.
https://doi.org/10.2753/MIS0742-1222240109.
[11] T.S. Ragu-Nathan, M. Tarafdar, B.S. Ragu-Nathan, Q. Tu, The Consequences of Technostress
for End Users in Organizations: Conceptual Development and Empirical Validation, Information
Systems Research 19 (2008) 417–433. https://doi.org/10.1287/isre.1070.0165.
[12] C. Weinert, C. Maier, S. Laumer, T. Weitzel, Technostress mitigation: an experimental study of
social support during a computer freeze, J Bus Econ 90 (2020) 1199–1249.
https://doi.org/10.1007/s11573-020-00986-y.
[13] M. Tarafdar, C.L. Cooper, J. Stich, The technostress trifecta ‐ techno eustress, techno distress
and design: Theoretical directions and an agenda for research, Information Systems Journal 29
(2019) 6–42. https://doi.org/10.1111/isj.12169.
[14] C. Maier, S. Laumer, J. Wirth, T. Weitzel, Technostress and the hierarchical levels of
personality: a two-wave study with multiple data samples, European Journal of Information Systems 28
(2019) 496–522. https://doi.org/10.1080/0960085X.2019.1614739.
[15] S.C. Srivastava, S. Chandra, A. Shirish, Technostress creators and job outcomes: theorising the
moderating influence of personality traits, Information Systems Journal 25 (2015) 355–401.
https://doi.org/10.1111/isj.12067.
[16] S. Zhang, L. Zhao, Y. Lu, J. Yang, Do you get tired of socializing? An empirical explanation of
discontinuous usage behaviour in social network services, Information &amp; Management 53 (2016)
904–914. https://doi.org/10.1016/j.im.2016.03.006.
[17] M. Tarafdar, E.Bolman. Pullins, T.S. Ragu‐Nathan, Technostress: negative effect on
performance and possible mitigations, Information Systems Journal 25 (2015) 103–132.
https://doi.org/10.1111/isj.12042.
[18] C.B. Califf, S. Sarker, S. Sarker, The Bright and Dark Sides of Technostress: A Mixed-Methods
Study Involving Healthcare IT, MISQ 44 (2020) 809–856.
https://doi.org/10.25300/MISQ/2020/14818.
[19] R. Ayyagari, V. Grover, R. Purvis, Technostress: Technological Antecedents and Implications, MIS
Quarterly 35 (2011) 831. https://doi.org/10.2307/41409963.
[20] H. Pirkkalainen, M. Salo, M. Tarafdar, M. Makkonen, Deliberate or Instinctive? Proactive and
Reactive Coping for Technostress, Journal of Management Information Systems 36 (2019) 1179–
1212. https://doi.org/10.1080/07421222.2019.1661092.
[21] P. Man Tang, J. Koopman, S.T. McClean, J.H. Zhang, C.H. Li, D. De Cremer, Y. Lu, C.T.S. Ng,
When Conscientious Employees Meet Intelligent Machines: An Integrative Approach Inspired
by Complementarity Theory and Role Theory, AMJ 65 (2022) 1019–1054.
https://doi.org/10.5465/amj.2020.1516.
[22] S. Chen, X. Wen, S. Ke, Q. Ni, R. Xu, W. He, What does intelligentization bring? A perspective
from the impact of mental workload on operational risk, Transportation Research Part E:
Logistics and Transportation Review 194 (2025) 103944. https://doi.org/10.1016/j.tre.2024.103944.
[23] K. Tam, R. Hopcraft, T. Crichton, K. Jones, The potential mental health effects of remote
control in an autonomous maritime world, Journal of International Maritime Safety, Environmental
Affairs, and Shipping 5 (2021) 40–55. https://doi.org/10.1080/25725084.2021.1922148.
[24] A. Choudhury, O. Asan, Impact of cognitive workload and situation awareness on clinicians’
willingness to use an artificial intelligence system in clinical practice, IISE Transactions on
Healthcare Systems Engineering 13 (2023) 89–100.
https://doi.org/10.1080/24725579.2022.2127035.
[25] L. Jafri, A.J. Farooqui, J. Grant, U. Omer, R. Gale, S. Ahmed, A.H. Khan, I. Siddiqui, F. Ghani, H.
Majid, Insights from semi-structured interviews on integrating artificial intelligence in clinical
chemistry laboratory practices, BMC Med Educ 24 (2024) 170.
https://doi.org/10.1186/s12909-024-05078-x.
[26] L. Irgang, A. Sestino, H. Barth, M. Holmén, Healthcare workers’ adoption of and satisfaction
with artificial intelligence: The counterintuitive role of paradoxical tensions and paradoxical
mindset, Technological Forecasting and Social Change 212 (2025) 123967.
https://doi.org/10.1016/j.techfore.2024.123967.
[65] H. Khairy, M. Ahmed, A. Asiri, F. Gazzawe, M. Abdel Fatah, N. Ahmad, A. Qahmash, M. Agina,
Catalyzing Green Work Engagement in Hotel Businesses: Leveraging Artificial Intelligence,
Sustainability 16 (2024) 7102. https://doi.org/10.3390/su16167102.
[66] A. Kumar, B. Krishnamoorthy, S.S. Bhattacharyya, Machine learning and artificial
intelligence-induced technostress in organizations: a study on automation-augmentation paradox with
sociotechnical systems as coping mechanisms, IJOA 32 (2024) 681–701.
https://doi.org/10.1108/IJOA-01-2023-3581.
[67] P. Cappelli, R. Nehmeh, HR’s New Role, Harvard Business Review 2024 (2024).
https://www.scopus.com/inward/record.uri?eid=2-s2.0-85195155063&amp;partnerID=40&amp;md5=a4dec74a1c2ce407d47d8a265acd36c1.
[68] Y.-T. Chuang, H.-L. Chiang, A.-P. Lin, Insights from the Job Demands–Resources Model: AI’s
dual impact on employees’ work and life well-being, International Journal of Information
Management 83 (2025) 102887. https://doi.org/10.1016/j.ijinfomgt.2025.102887.
[69] J. Ye, The impact of electronic health record–integrated patient-generated health data on
clinician burnout, Journal of the American Medical Informatics Association 28 (2021) 1051–1056.
https://doi.org/10.1093/jamia/ocab017.
[70] S. Verma, V. Singh, A.A. Tudoran, S.S. Bhattacharyya, Elevating employees’ psychological
responses and task performance through responsible artificial intelligence, ITP 37 (2024) 2551–
2567. https://doi.org/10.1108/ITP-05-2023-0431.
[71] C.W. Grant, J. Marrero‐Polanco, J.B. Joyce, B. Barry, A. Stillwell, K. Kruger, T. Anderson, H.
Talley, M. Hedges, J. Valery, R. White, R.R. Sharp, P.E. Croarkin, L.N. Dyrbye, W.V. Bobo, A.P.
Athreya, Pharmacogenomic augmented machine learning in electronic health record alerts: A
health system‐wide usability survey of clinicians, Clinical Translational Sci 17 (2024) e70044.
https://doi.org/10.1111/cts.70044.
[72] K. Meduri, G.S. Nadella, H. Gonaygunta, D. Kumar, S.R. Addula, S. Satish, M.H. Maturi, S.U.
Rehman, Human-centered AI for personalized workload management: A multimodal approach
to preventing employee burnout, J. Infras. Policy. Dev. 8 (2024) 6918.
https://doi.org/10.24294/jipd.v8i9.6918.
[73] Y. Chen, Z. Wu, P. Wang, L. Xie, M. Yan, M. Jiang, Z. Yang, J. Zheng, J. Zhang, J. Zhu,
Radiology Residents’ Perceptions of Artificial Intelligence: Nationwide Cross-Sectional Survey Study,
J Med Internet Res 25 (2023) e48249. https://doi.org/10.2196/48249.
[74] J. Choudrie, N. Manandhar, C. Castro, C. Obuekwe, Hey Siri, Google! Can you help me? A
qualitative case study of smartphones AI functions in SMEs, Technological Forecasting and
Social Change 189 (2023) 122375. https://doi.org/10.1016/j.techfore.2023.122375.
[75] Y. Huang, D. Gursoy, How does AI technology integration affect employees’ proactive service
behaviors? A transactional theory of stress perspective, Journal of Retailing and Consumer
Services 77 (2024) 103700. https://doi.org/10.1016/j.jretconser.2023.103700.
[76] S. Zhou, N. Yi, R. Rasiah, H. Zhao, Z. Mo, An empirical study on the dark side of service
employees’ AI awareness: Behavioral responses, emotional mechanisms, and mitigating factors,
Journal of Retailing and Consumer Services 79 (2024) 103869.
https://doi.org/10.1016/j.jretconser.2024.103869.
[77] B. Cheng, H. Lin, Y. Kong, Challenge or hindrance? How and when organizational artificial
intelligence adoption influences employee job crafting, Journal of Business Research 164 (2023)
113987. https://doi.org/10.1016/j.jbusres.2023.113987.
[78] M. Yin, S. Jiang, X. Niu, Can AI really help? The double-edged sword effect of AI assistant on
employees’ innovation behavior, Computers in Human Behavior 150 (2024) 107987.
https://doi.org/10.1016/j.chb.2023.107987.
[79] J. Cao, Z. Song, An incoming threat: the influence of automation potential on job insecurity,
APJBA 17 (2025) 116–135. https://doi.org/10.1108/APJBA-07-2022-0328.
[80] L. Ding, Employees’ challenge-hindrance appraisals toward STARA awareness and competitive
productivity: a micro-level case, IJCHM 33 (2021) 2950–2969.
https://doi.org/10.1108/IJCHM-09-2020-1038.
[81] G. Xu, M. Xue, J. Zhao, The Relationship of Artificial Intelligence Opportunity Perception and
Employee Workplace Well-Being: A Moderated Mediation Model, IJERPH 20 (2023) 1974.
https://doi.org/10.3390/ijerph20031974.
[82] V.V. Muthuswamy, Impact of Cybersecurity and AI’s Related Factors on Incident Reporting
Suspicious Behaviour and Employees Stress: Moderating Role of Cybersecurity Training,
International Journal of Cyber Criminology 18 (2024) 83–107.
https://cybercrimejournal.com/menuscript/index.php/cybercrimejournal/article/view/330.
[83] X. Dong, Y. Tian, M. He, T. Wang, When knowledge workers meet AI? The double-edged
sword effects of AI adoption on innovative work behavior, JKM 29 (2025) 113–147.
https://doi.org/10.1108/JKM-02-2024-0222.
[84] J. Kang, H. Shin, C. Kang, Hospitality labor leakage and dynamic turnover behaviors in the age
of artificial intelligence and robotics, JHTT 15 (2024) 916–933.
https://doi.org/10.1108/JHTT-12-2023-0411.
[85] G. Xu, M. Xue, J. Zhao, The Association between Artificial Intelligence Awareness and
Employee Depression: The Mediating Role of Emotional Exhaustion and the Moderating Role of
Perceived Organizational Support, IJERPH 20 (2023) 5147.
https://doi.org/10.3390/ijerph20065147.
[86] A. Presbitero, M. Teng-Calleja, Job attitudes and career behaviors relating to employees’
perceived incorporation of artificial intelligence in the workplace: a career self-management
perspective, PR 52 (2023) 1169–1187. https://doi.org/10.1108/PR-02-2021-0103.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Singla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sukharevsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chui</surname>
          </string-name>
          ,
          <article-title>The state of AI: How organizations are rewiring to capture value</article-title>
          ,
          <source>McKinsey &amp; Company</source>
          ,
          <year>2025</year>
          . https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai#/ (accessed May 19, 2025).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liao</surname>
          </string-name>
          ,
          <article-title>The work affective well-being under the impact of AI</article-title>
          ,
          <source>Sci Rep</source>
          <volume>14</volume>
          (
          <year>2024</year>
          )
          <elocation-id>25483</elocation-id>
          . https://doi.org/10.1038/s41598-024-75113-w.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.-C.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <article-title>Does AI-Driven Technostress Promote or Hinder Employees' Artificial Intelligence Adoption Intention? A Moderated Mediation Model of Affective Reactions and Technical Self-Efficacy</article-title>
          ,
          <source>PRBM</source>
          <volume>17</volume>
          (
          <year>2024</year>
          )
          <fpage>413</fpage>
          -
          <lpage>427</lpage>
          . https://doi.org/10.2147/PRBM.S441444.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <article-title>Working with AI: The Effect of Job Stress on Hotel Employees' Work Engagement</article-title>
          ,
          <source>Behavioral Sciences</source>
          <volume>14</volume>
          (
          <year>2024</year>
          )
          <elocation-id>1076</elocation-id>
          . https://doi.org/10.3390/bs14111076.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>Issa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jaber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lakkis</surname>
          </string-name>
          ,
          <article-title>Navigating AI unpredictability: Exploring technostress in AI-powered healthcare systems</article-title>
          ,
          <source>Technological Forecasting and Social Change</source>
          <volume>202</volume>
          (
          <year>2024</year>
          )
          <elocation-id>123311</elocation-id>
          . https://doi.org/10.1016/j.techfore.2024.123311.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.S.</given-names>
            <surname>Lazarus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Folkman</surname>
          </string-name>
          , Stress, Appraisal, and Coping, Springer Publishing Company, New York,
          <year>1984</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Tarafdar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.L.</given-names>
            <surname>Cooper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Stich</surname>
          </string-name>
          ,
          <article-title>The technostress trifecta ‐ techno eustress, techno distress and design: Theoretical directions and an agenda for research</article-title>
          ,
          <source>Information Systems Journal</source>
          <volume>29</volume>
          (
          <year>2019</year>
          )
          <fpage>6</fpage>
          -
          <lpage>42</lpage>
          . https://doi.org/10.1111/isj.12169.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Salo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Pirkkalainen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Makkonen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Hekkala</surname>
          </string-name>
          ,
          <article-title>Distress, Eustress, or No Stress? Explaining Smartphone Users' Different Technostress Responses</article-title>
          ,
          <source>in: Proceedings of the 39th International Conference on Information Systems (ICIS 2018), Association for Information Systems</source>
          , San Francisco, California, USA,
          <year>2018</year>
          . https://aisel.aisnet.org/icis2018/behavior/Presentations/13/.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>