<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Towards Fourth-Order Cybernetics in AI Governance: Socio-Technical Perspectives on AI Risk Management in Knowledge-Based Organizations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ludmila Jiříčková</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>North Macedonia</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>202</volume>
      <fpage>5</fpage>
      <lpage>12</lpage>
      <abstract>
        <p>As artificial intelligence (AI) becomes increasingly embedded in knowledge-based small and medium-sized enterprises (SMEs), especially in consulting, education, and finance, managing AI-related risks requires more than static compliance. This paper introduces a novel governance framework by integrating sociotechnical systems (STS) theory with fourth-order cybernetics. Unlike conventional approaches that treat governance as a one-time setup, this paper conceptualizes AI governance as a reflexive, multi-layered, and adaptive feedback process embedded in organizational context. It critically reviews four major AI governance frameworks (the EU AI Act, OECD AI Principles, IEEE Ethically Aligned Design, and the UNESCO Recommendation), highlighting their limitations in addressing dynamic socio-technical risks. In response, it proposes a cybernetic model designed for SMEs, combining participatory co-design, continuous monitoring, contextual awareness, and adaptive compliance. Practical use cases from consulting, EdTech, and fintech illustrate how this model supports responsible AI innovation while enhancing resilience and trust. This contribution offers a forward-looking pathway for aligning AI systems with both human values and evolving operational realities.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>AI Governance</kwd>
        <kwd>Fourth-Order Cybernetics</kwd>
        <kwd>Socio-Technical Systems</kwd>
        <kwd>Knowledge-Based Organizations</kwd>
        <kwd>SMEs</kwd>
        <kwd>Adaptive Risk Management</kwd>
        <kwd>Reflexivity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Background: Socio-Technical Systems and Cybernetics</title>
      <p>The socio-technical systems (STS) perspective originated in the mid-20th century (Tavistock
Institute, Scandinavia) to reconcile technology and work. Ropohl (1999) explains that STS theory was
designed “to stress the reciprocal interrelationship between humans and machines” and to shape
both the technical and social conditions of work. In other words, STS aims to jointly optimize tools
and human organization, improving both efficiency and human well-being. Enid Mumford and
colleagues extended these ideas into information systems; for instance, Mumford’s ETHICS method
explicitly integrated employee needs into system design. The core notion is that technology must
be understood in its human/organizational context, not in isolation.</p>
      <p>Parallel to STS in organizational studies, the field of cybernetics has evolved through multiple
“orders” of system dynamics. First-order cybernetics (Wiener, 1948) viewed control systems from an
external perspective, while second-order cybernetics (von Foerster, 1974) introduced
self-observation and reflexivity (systems as both objects and observers). More recently, scholars have
conceptualized third-order and fourth-order cybernetics to address increasingly complex systems. In
particular, fourth-order cybernetics asks what happens when a system can “redefine itself” within its
environment. Chiolerio (2020) notes that fourth-order cybernetics “focuses on the integration of a
system within its larger, co-defining context” and implies the system will “immerge” into its
environment. In practical terms, this suggests viewing organizations (and their governance) as dynamic,
self-modifying entities rather than static frameworks.</p>
      <p>STS and cybernetics intersect naturally in the AI era. Knowledge-based firms now rely on
data-intensive automation and digital collaboration, increasing the complexity of socio-technical issues
(e.g. algorithmic impacts on work, learning, and privacy). Several authors argue that conventional
STS approaches—often focusing on one-time design optimizations—may not suffice for such
dynamic contexts. Instead, ongoing adaptation and reflexive learning are needed. Bednar and Welch
(2020) argue that smart working environments require a socio-technical redesign that sustains both
technological performance and human meaning, thus emphasizing the need for systems thinking
at the organizational level. This has prompted interest in higher-order cybernetic concepts
(sociocybernetics) to model organizations and technologies co-evolving. The combination of STS
(emphasizing human–tech relationships) with fourth-order cybernetics (emphasizing system
reconfiguration and context-embedding) provides a rich theoretical basis for next-generation AI
governance.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Critical Review of Socio-Technical Development</title>
      <p>Over the past decades, STS research has broadened from local workplace design to enterprise and
societal levels. Early STS interventions (e.g. Tavistock, Scandinavian participative design) targeted
joint optimization of tools and tasks. Mumford and Legge (1978) illustrated STS in practice by
redesigning offices and workflows to enhance both job satisfaction and productivity. However,
critiques have emerged. Some researchers note that STS can be too idealistic, potentially neglecting
external pressures like market forces and power relations. In practice, purely socio-technical
interventions may falter if they ignore wider constraints. Nevertheless, the fundamental insight
persists: technology and organization are inseparable; socio-technical integration can make systems
more sustainable and humane.</p>
      <p>Digital transformation and AI add new dimensions to this phenomenon. In knowledge-based SMEs,
data-driven tools and AI agents interact with experts and clients in complex ways, leading to
emergent risks (bias, mission creep, deskilling). Traditional STS methods typically treat governance
design as a discrete phase. By contrast, sociocybernetic thinking suggests continuous adaptation.
Fourth-order cybernetics has been invoked to fill this gap: it envisages systems (and meta-systems)
that continuously self-observe and reconfigure. Chiolerio (2020) explicitly describes fourth-order
cybernetics as a realm where a system “redefines itself” within its context. From this perspective,
one can critique existing STS-based governance as often too static; a truly socio-technical
governance must enable organizations to learn and transform their own rules. In summary, the STS
tradition provides rich insights into human–tech integration, but applying it to modern AI
challenges calls for higher-order reflexivity – an adaptive, context-sensitive governance loop.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Theoretical Contribution: Fourth-Order Socio-Technical Perspective</title>
      <p>Building on the above, the theoretical contribution of this paper is to synthesize STS theory with Fourth-Order
Cybernetics to reconceptualize AI governance. It proposes treating AI governance frameworks
themselves as socio-technical cybernetic systems. In this view, the governance apparatus (policies,
committees, monitoring tools) is not a one-way controller but a reflexive agent embedded in the
organization’s context. It has a dual nature: on one level, it imposes rules and standards on AI
systems; on another level, it continually observes AI outcomes, stakeholder feedback, and
environmental changes, then adapts its own rules and processes accordingly.</p>
      <p>P2P Foundation’s description of a fourth-order system captures this: “The 4th Order system is
contextualized, embedded and integrated into the context… It operates both as a system in its
context, and as a system that is part of the context”. Applied to governance, this means an
organization’s AI risk processes must not only influence the technical system, but also evolve from
lessons learned. For example, rather than treating risk categories as fixed, a fourth-order approach
would allow the organization to revise what it considers “high risk” based on emerging information.
In short, the model envisions governance as a meta-system with self-awareness: it steers itself via
feedback loops. This self-regulatory, embedding quality (cf. Chiolerio’s “immergence” into the
environment) sets the stage for more robust, sustainable AI oversight.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Comparative Analysis of Leading Frameworks</title>
      <p>Four prominent AI governance frameworks illustrate the current landscape. The OECD AI
Principles (2019; updated 2024) offer voluntary guidelines emphasizing trust and democracy. They call
for AI that is “innovative and trustworthy” while respecting human rights and democratic values.
The OECD enumerates values such as inclusive growth, transparency, robustness, fairness and
accountability, and urges international cooperation in governance. These principles serve as a
nonbinding baseline: many countries (including EU members) cite them in national policies. While
broad, the OECD framework lacks specific enforcement mechanisms, relying on governments and
industry to interpret it.</p>
      <p>
        In contrast, the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is a binding law. The
European Commission bills it as the “first-ever comprehensive legal framework on AI,” aimed at
fostering “trustworthy AI in Europe”. The Act adopts a risk-based approach: certain “unacceptable”
AI uses are banned outright (e.g. social scoring systems or biometric surveillance in public spaces).
“High-risk” systems (such as AI for hiring, credit scoring, or medical devices) must undergo strict
pre-market checks: providers must conduct risk assessments, ensure high-quality training data,
maintain detailed documentation, and implement human oversight and robustness measures.
Lesser-risk tools (e.g. AI chatbots) are subject to transparency requirements (users must be notified
they interact with AI). To ease SME compliance, the Act includes supportive measures: for
instance, SMEs get priority
access to regulatory AI “sandboxes” and benefit from reduced fees and lighter reporting obligations.
Critics have raised concerns: some warn that the Act’s broad definition of AI might inadvertently
capture simple software, potentially “stifling further innovation” with unclear boundaries.</p>
      <p>The IEEE Ethically Aligned Design (EAD) documents are voluntary industry guidelines produced
by the IEEE Global Initiative on the Ethics of Autonomous and Intelligent Systems. They are not
legally enforceable but are influential in engineering practice. EAD is explicitly human-centric. It is
subtitled “A Vision for Prioritizing Human Well-being with Autonomous and Intelligent
Systems”
        <xref ref-type="bibr" rid="ref26">(Floridi &amp; Cowls, 2019)</xref>
        . Version I of EAD articulates high-level ethical principles and
concrete recommendations, aiming to ensure AI “provably aligns with and improves holistic societal
wellbeing”. EAD covers a wide range of issues—from privacy and accountability to ecological
sustainability—and it has spurred related standards (e.g. the IEEE P7000 series) and certification
efforts. However, as a self-regulatory vision, EAD leaves actual implementation up to organizations;
it relies on technologists and policymakers to translate principles into practice.
      </p>
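      <p>To make the Act’s tiered, risk-based approach concrete, it can be sketched as a simple triage routine. The sketch below is purely illustrative: the tier names follow the Act’s categories as summarized above, but the use-case mappings, the obligation summaries, and the classify_use_case helper are hypothetical simplifications, not official tooling or legal guidance.</p>

```python
# Illustrative sketch of the EU AI Act's risk-based triage (Regulation (EU) 2024/1689).
# The tiers mirror the Act's categories; the example mappings and obligation
# summaries below are simplified assumptions for illustration, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict pre-market checks: risk assessment, data quality, documentation, human oversight"
    LIMITED = "transparency duties: users must be told they interact with AI"
    MINIMAL = "no specific obligations"

# Hypothetical mapping of use cases to tiers, following the examples in the text.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "public biometric surveillance": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "medical device": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Return the risk tier for a use case, defaulting to minimal risk."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for case in ("credit scoring", "chatbot", "spam filter"):
        tier = classify_use_case(case)
        print(f"{case}: {tier.name} ({tier.value})")
```

      <p>An SME could use such a triage step as the entry point of its compliance workflow, before working through the heavier obligations attached to each tier.</p>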
      <p>The UNESCO Recommendation on the Ethics of AI (2021) is the first global intergovernmental AI
ethics framework. It explicitly centers human rights and dignity: “the protection of human rights
and dignity is the cornerstone of the Recommendation,” with specific emphasis on transparency,
fairness, and the importance of human oversight of AI. UNESCO’s document extends beyond
general values by defining ten Policy Action Areas (e.g. data governance, environment, education,
health, culture) that member states should address in AI policies. In practice, UNESCO’s approach
provides a normative standard for states and organizations, though it lacks legal binding force. The
UNESCO framework underscores broad societal considerations (e.g. equity, literacy, gender
inclusion) alongside technical ethics.</p>
      <p>In comparing these frameworks, common themes emerge: all stress trustworthy, human-centric AI
(rights, transparency, fairness, accountability), and call for multi-stakeholder cooperation. However,
they differ in scope and mechanism. The EU Act is detailed law with penalties, while OECD and
UNESCO are non-binding principles. IEEE EAD is a private-sector initiative focusing on design.
Notably, none of these frameworks explicitly incorporates fourth-order systemic dynamics. In
practice, each provides static goals or rules but offers limited guidance on how organizations
should adapt their governance processes over time. From a fourth-order perspective, this is a key
limitation: governing AI only through fixed principles or checklists may miss how risks evolve. A
fourth-order view suggests that we need governance models that themselves learn and reconfigure
based on experience. For example, whereas the EU Act mandates documentation, a 4th-order
approach would also ask how those documents feed back into policy revision and organizational
learning. In summary, existing frameworks supply important content but largely assume that
organizations will implement them as given. They do not prescribe how organizations can co-evolve
with their environments, which is precisely the gap our approach seeks to fill.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Proposed Socio-Technical Cybernetic Governance Model</title>
      <p>To address these gaps, this paper proposes a novel governance model grounded in socio-technical systems theory and fourth-order cybernetics. The model treats AI governance as an adaptive socio-technical feedback process, not a one-time compliance checklist. Its key components are:</p>
      <list list-type="bullet">
        <list-item><p>Layered Reflexive Governance: Risk management is structured in layers mirroring organizational scales. At the technical layer, firms implement continuous monitoring (e.g. automated logging, explainability and bias-detection tools) on their AI systems. At the human/organizational layer, cross-functional teams (engineers, domain experts, ethicists, and end-users) hold regular review meetings. These teams interpret the technical data, assess social implications, and decide on adjustments. At a higher “meta” layer, industry consortia or regulator-led coalitions aggregate lessons from multiple firms. In effect, each layer provides feedback to the others, creating double-loop learning.</p></list-item>
        <list-item><p>Participatory Co-Design and Oversight: Consistent with STS principles, the model embeds stakeholder engagement throughout the AI lifecycle. For example, when developing an AI tool, an SME would hold co-design workshops with employees and even clients to gather requirements and detect potential harms early. During deployment, affected parties (users, customers) can report issues (e.g. through surveys or community forums). Governance bodies then incorporate this feedback. Such practices echo sociotechnical guidance: companies should “co-design technical solutions with the communities who stand to benefit” and engage domain experts and users to inform design. In essence, oversight is shared: the organization and its stakeholders jointly steer the AI’s evolution.</p></list-item>
        <list-item><p>Context-Integrated Monitoring: The governance system continuously integrates external context signals. For instance, it watches for regulatory changes (like updates to the EU AI Act), market shifts, or public concerns. If a new law is proposed or a public controversy erupts (say, about student data privacy), the SME’s governance team revises its risk matrix accordingly. This reflects the fourth-order idea that a system is “embedded in its context”. Practically, this might involve automated news monitoring and scheduled policy reviews. The key is that governance does not react only to internal incidents but remains sensitive to the broader environment, “immerging” into it.</p></list-item>
        <list-item><p>Adaptive Compliance Processes: Rather than static checklists, policies and controls are treated as living documents. An internal AI ethics code or risk policy is periodically reviewed and updated based on new experiences. Technical controls (e.g. model retraining frequency, privacy thresholds) are adjusted via ongoing metrics and feedback. For example, if an audit finds a persistent bias, the model is retrained with more diverse data. The organization documents not just compliance records, but also lessons learned and process changes, feeding them back into a continuous improvement loop. This cybernetic self-regulation means the governance “rules of the game” evolve as the game is played.</p></list-item>
      </list>
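      <p>The adaptive compliance loop described above can be sketched as a minimal feedback routine in which monitoring metrics and external context signals feed back into the policy, which then revises its own rules. All class names, thresholds, and signals below are hypothetical illustrations of the cybernetic loop, not a reference implementation.</p>

```python
# Minimal sketch of the model's adaptive compliance loop: monitoring metrics and
# external context signals feed back into the policy, which revises its own rules.
# All class names, thresholds, and signals are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    bias_threshold: float = 0.10          # max tolerated disparity between groups
    risk_matrix: dict = field(default_factory=lambda: {"credit scoring": "high"})
    lessons_learned: list = field(default_factory=list)

    def review_audit(self, observed_bias: float) -> str:
        """Technical layer: compare monitoring metrics against the current rules."""
        if observed_bias > self.bias_threshold:
            self.lessons_learned.append(f"bias {observed_bias:.2f} exceeded threshold")
            return "retrain with more diverse data"
        return "no action"

    def review_context(self, signal: str) -> None:
        """Meta layer: external signals (new laws, controversies) revise the rules themselves."""
        if signal == "new privacy regulation proposed":
            self.bias_threshold = 0.05           # tighten the rule itself: second-order change
            self.risk_matrix["student data"] = "high"
            self.lessons_learned.append(f"context signal: {signal}")

policy = GovernancePolicy()
action = policy.review_audit(observed_bias=0.12)   # an audit finds persistent bias
policy.review_context("new privacy regulation proposed")
print(action, policy.bias_threshold, sorted(policy.risk_matrix))
```

      <p>The point of the sketch is the second-order move in review_context: the policy does not merely apply its rules, it rewrites them (tightening the bias threshold, extending the risk matrix) in response to its environment.</p>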
      <p>Together, these features create a socio-technical cybernetic loop: technical performance and social
inputs inform each other iteratively. The model leverages existing structures where available. For
instance, an SME can use the EU AI Act’s classification as a starting taxonomy, but then enrich it
with internal risk categories based on its context. Czech and European knowledge-based SMEs can
plug into national AI programs: they might test their AI in an EU-regulator-backed sandbox (as
SMEs receive priority access) and share findings through Digital Innovation Hubs or industry
consortia.</p>
      <p>The goal is to situate each organization within a learning ecosystem. In effect, the firm becomes
both a subject and object of governance: it helps shape norms even as it follows them, exemplifying
a 4th-order “meta-system” stance.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Discussion: Implications and Use Cases</title>
      <p>This fourth-order socio-technical governance model has several implications. It operationalizes
responsible innovation principles by embedding reflexivity, inclusion, and anticipation into AI risk
management. In practice, it means organizations are proactive learners, not just rule-followers. For
example, by institutionalizing stakeholder feedback, firms can anticipate unintended harms (e.g.
algorithmic bias) before they escalate. By continuously scanning the environment, they can adapt
faster than regulatory cycles. Policy-wise, this suggests that regulators and standards bodies should
encourage dynamic compliance (for example, by recognizing continuous improvement reports or
iterative certification), not just one-time audits. The model also builds trust: when employees and
clients see that an organization takes feedback seriously and adjusts its technology, confidence in AI
grows.</p>
      <p>The following use cases illustrate how knowledge-based SMEs might apply the model:</p>
      <list list-type="bullet">
        <list-item><p>Consulting SME: A small consulting firm develops an AI analytics tool for clients. Using our model, the firm first co-designs the tool with senior consultants and pilot clients, uncovering initial assumptions (e.g. which market data matters). Upon deployment, the firm collects feedback from consultants using the tool; perhaps clients report that certain recommendations seem biased or irrelevant. The governance team then investigates, retrains the model on better data, and updates the tool. The team monitors technical metrics (accuracy, fairness) alongside business metrics (client satisfaction, revenue impact), and holds monthly review sessions. If a new data privacy regulation is announced, the firm revises its data handling policy immediately. In this way, the tool evolves based on both quantitative logs and qualitative user insights, embodying a continuous STS feedback loop.</p></list-item>
        <list-item><p>Educational SME: An EdTech startup offers an AI-driven personalized learning platform. Following our approach, it pilots the system in classrooms and gathers teacher and student input on the learning content. Teachers note whether the AI’s suggestions are pedagogically sound and culturally appropriate. The startup’s oversight board (including educators) uses this feedback to adjust the recommendation algorithms and to define new usage guidelines. The platform also transparently informs students when content is AI-generated, addressing UNESCO’s call for transparency and human oversight. Importantly, the governance team tracks changes in curriculum standards and public concerns about AI in education. For example, if an education authority updates its digital literacy curriculum, the firm updates the AI content to match. In sum, the educational SME governs its AI system by tightly coupling technological adjustments with social context and stakeholder oversight.</p></list-item>
        <list-item><p>Financial SME: A small fintech company deploys an AI system for SME loan approvals. Recognizing the “high-risk” nature of credit decisions, the firm applies the EU Act’s rules (strict testing, traceability, human review) and layers this model on top. The AI system includes extensive logging and fairness metrics (fulfilling the Act’s documentation and accuracy requirements). An internal ethics committee (with finance experts and customer advocates) reviews any contentious decisions. The company participates in an EU regulatory sandbox (which offers free priority access for SMEs) to test its model under supervision. It also scans financial news and policy forecasts: if the economy shifts or new laws on algorithmic lending are proposed, the model parameters or risk thresholds are adjusted accordingly. Thus, governance is not a single compliance project but an ongoing adaptive process, with the firm constantly co-evolving its AI model, aligning with the fourth-order notion of a self-modifying system.</p></list-item>
      </list>
      <p>In each case, the model synthesizes legal rules and ethical ideals into practice. It leverages formal
mechanisms (the EU Act’s obligations, OECD best-practice, UNESCO’s human-rights lens, IEEE’s
wellbeing focus) but situates them in a reflexive organizational process. Practical supports like EU
sandboxes and Digital Innovation Hubs become nodes in the feedback network. Crucially, the
emphasis is on how the organization uses these tools: co-design workshops, iterative testing, and
open communication channels operationalize the abstract principles. This is aligned with
sociotechnical best-practices: for example, a known recommendation is to involve future technology
users early and iteratively, which this model institutionalizes.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusion</title>
      <p>This paper has proposed that bringing Fourth-Order Cybernetics into socio-technical AI governance
can address key limitations of existing approaches. Our review showed that while frameworks like
the EU AI Act, OECD Principles, IEEE EAD, and UNESCO Recommendations provide valuable
guidance on what values to uphold, they often assume governance is static. By contrast, a
fourth-order STS perspective treats governance itself as an evolving, embedded system. Our novel model
makes this explicit, framing AI risk management as a continuous, multi-layered feedback loop.
This deepens the connection between technology and context, ensuring that Czech and European
knowledge-based SMEs can adapt their AI usage responsibly over time.</p>
      <p>In doing so, the model also resonates with responsible innovation discourse: it embeds reflexivity,
anticipatory thinking, and stakeholder inclusion at the core of governance. Policymakers and
standards bodies should note this orientation, perhaps by incentivizing adaptive compliance (e.g. by
recognizing “learning” reports or dynamic risk assessments). Future research should empirically
evaluate this framework: case studies could document how SMEs implement layered governance
and whether it leads to fewer AI-related incidents. Overall, embracing a fourth-order
socio-technical outlook offers a promising pathway to more resilient and responsible AI deployment in
practice.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author used GPT-4 for grammar and spelling
checking. After using this tool, the author reviewed and edited the content as needed and takes full
responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Ropohl</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>Philosophy of socio-technical systems</article-title>
          .
          <source>Science, Technology &amp; Society</source>
          ,
          <volume>4</volume>
          (
          <issue>3</issue>
          ),
          <fpage>59</fpage>
          -
          <lpage>76</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Leitch</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Warren</surname>
            ,
            <given-names>M. J.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>ETHICS: The past, present and future of socio-technical systems design</article-title>
          . In P. Trischler &amp; H.
          <string-name>
            <surname>Schtzer</surname>
          </string-name>
          (Eds.),
          <source>History of Computing (IFIP AICT</source>
          , Vol.
          <volume>325</volume>
          , pp.
          <fpage>189</fpage>
          -
          <lpage>197</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>OECD.</surname>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>AI Principles Overview</article-title>
          . Retrieved from
          <source>OECD.AI</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>European Commission</surname>
          </string-name>
          . (
          <year>2024</year>
          ).
          <article-title>Shaping Europe's Digital Future: The AI Act</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>ArtificialIntelligenceAct.eu (Statworx).</surname>
          </string-name>
          (
          <year>2025</year>
          ).
          <article-title>Small Businesses' Guide to the AI Act</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>IEEE Global Initiative on Ethics of A/IS</surname>
          </string-name>
          . (
          <year>2019</year>
          ).
          <article-title>Ethically Aligned Design (v1)</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>UNESCO.</surname>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>Recommendation on the Ethics of AI</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Bogen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Winecoff</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Applying Sociotechnical Approaches to AI Governance in Practice. Center for Democracy &amp; Technology.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Chiolerio</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Liquid Cybernetic Systems: The Fourth-Order Cybernetics</article-title>
          . Advanced
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <article-title>"Fourth Order Cybernetics"</article-title>
          (
          <year>2023</year>
          ).
          <source>P2P Foundation</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Clifford Chance</surname>
          </string-name>
          . (
          <year>2023</year>
          , April).
          <source>The EU AI Act: Concerns and Criticism</source>
          . Retrieved from Clif-
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>European Commission</surname>
          </string-name>
          .
          (
          <year>2023</year>
          ).
          <source>Czech Republic AI Strategy Report. Retrieved from AI Watch</source>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Baxter</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sommerville</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Socio-technical systems: From design methods to systems engineering</article-title>
          .
          <source>Interacting with Computers</source>
          ,
          <volume>23</volume>
          (
          <issue>1</issue>
          ),
          <fpage>4</fpage>
          -
          <lpage>17</lpage>
          . https://doi.org/10.1016/j.intcom.2010.07.003
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Cheong</surname>
            ,
            <given-names>B. C.</given-names>
          </string-name>
          , et al. (
          <year>2024</year>
          ).
          <article-title>The sociotechnical entanglement of AI and values</article-title>
          .
          <source>AI &amp; Society</source>
          . https://doi.org/10.1007/s00146-023-01852-5
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Bednar</surname>
            ,
            <given-names>P. M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Welch</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Socio-technical perspectives on smart working: Creating meaningful and sustainable systems</article-title>
          .
          <source>Information Systems Frontiers</source>
          ,
          <volume>22</volume>
          (
          <issue>2</issue>
          ),
          <fpage>281</fpage>
          -
          <lpage>298</lpage>
          . https://doi.org/10.1007/s10796-019-09921-1
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Makarius</surname>
            ,
            <given-names>E. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mukherjee</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fox</surname>
            ,
            <given-names>J. D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Fox</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization</article-title>
          .
          <source>Journal of Business Research</source>
          ,
          <volume>120</volume>
          ,
          <fpage>262</fpage>
          -
          <lpage>273</lpage>
          . https://doi.org/10.1016/j.jbusres.2020.07.045
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Chiolerio</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Liquid cybernetic systems: The fourth-order cybernetics</article-title>
          .
          <source>Advanced Intelligent Systems</source>
          ,
          <volume>2</volume>
          (
          <issue>12</issue>
          ),
          <elocation-id>2000120</elocation-id>
          . https://doi.org/10.1002/aisy.202000120
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Dalpiaz</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Giorgini</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Mylopoulos</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Adaptive socio-technical systems: A requirements-based approach</article-title>
          .
          <source>Requirements Engineering</source>
          ,
          <volume>18</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>24</lpage>
          . https://doi.org/10.1007/s00766-011-0132-1
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Niehaus</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wiesche</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>A socio-technical perspective on organizational interaction with AI: A literature review</article-title>
          .
          <source>Proceedings of the European Conference on Information Systems (ECIS 2021)</source>
          . https://aisel.aisnet.org/ecis2021_rp/156
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Kumar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krishnamoorthy</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Bhattacharyya</surname>
            ,
            <given-names>S. S.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Machine learning and artificial intelligence-induced technostress in organizations</article-title>
          .
          <source>International Journal of Organizational Analysis</source>
          ,
          <volume>32</volume>
          (
          <issue>4</issue>
          ). https://doi.org/10.1108/IJOA-01-2023-3581
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>An intelligent sociotechnical systems (iSTS) framework</article-title>
          . https://doi.org/10.48550/arXiv.2401.03223
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Dean</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gilbert</surname>
            ,
            <given-names>T. K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lambert</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zick</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2021</year>
          ).
          <article-title>Axes for sociotechnical inquiry in AI research</article-title>
          . https://doi.org/10.48550/arXiv.2105.06551
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Ehsan</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M. O.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Human-centered explainable AI</article-title>
          . https://doi.org/10.48550/arXiv.2002.01092
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Binder</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sommerville</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          (
          <year>2011</year>
          ).
          <article-title>Socio-technical systems</article-title>
          .
          <source>Interacting with Computers</source>
          ,
          <volume>23</volume>
          (
          <issue>1</issue>
          ),
          <fpage>4</fpage>
          -
          <lpage>17</lpage>
          . https://doi.org/10.1016/j.intcom.2010.07.003
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Hagendorff</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>The ethics of AI ethics</article-title>
          .
          <source>Minds and Machines</source>
          ,
          <volume>30</volume>
          (
          <issue>1</issue>
          ),
          <fpage>99</fpage>
          -
          <lpage>120</lpage>
          . https://doi.org/10.1007/s11023-020-09517-8
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Floridi</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Cowls</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>A unified framework for AI in society</article-title>
          .
          <source>Harvard Data Science Review</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ). https://doi.org/10.1162/99608f92.8cd550d1
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Hazenberg</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zwitter</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2024</year>
          ).
          <article-title>Cybernetic governance</article-title>
          .
          <source>Ethics and Information Technology</source>
          . https://doi.org/10.1007/s10676-024-09763-9
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Burton-Jones</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Grange</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Powers of action in socio-technical systems</article-title>
          .
          <source>Journal of Management Information Systems</source>
          ,
          <volume>30</volume>
          (
          <issue>4</issue>
          ),
          <fpage>13</fpage>
          -
          <lpage>48</lpage>
          . https://doi.org/10.2753/MIS0742-
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Burton-Jones</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Straub</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Reconceptualizing system usage</article-title>
          .
          <source>Information Systems Research</source>
          ,
          <volume>17</volume>
          (
          <issue>3</issue>
          ),
          <fpage>228</fpage>
          -
          <lpage>246</lpage>
          . https://doi.org/10.1287/isre.1060.0096
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>