<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Lightweight AI Governance (LAIG) Framework for SMEs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aleksander Młodawski</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aleksandra Wolniak</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kozminski University</institution>
          ,
          <addr-line>Jagiellońska 59, 03-301 Warsaw</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Small and medium-sized enterprises (SMEs) increasingly deploy artificial intelligence solutions, yet they must comply with the European Union's Artificial Intelligence Act. High-risk systems, such as credit scoring tools or automated hiring screeners, must meet stringent requirements for risk management, technical documentation, human oversight, and post-market monitoring. Medium and low-risk systems also require ongoing observation to verify that their initial classification remains accurate and does not escalate. Although legislators introduced proportionality measures for smaller businesses, recent analyses suggest that implementation costs remain prohibitive for SMEs lacking in-house compliance staff. Commission estimates indicate that a 50-person start-up could incur roughly EUR 216 000-319 000 in first-year compliance costs for a single AI system. To close this gap, we propose the Lightweight AI Governance (LAIG) framework, a pragmatic, risk-based governance model expressly designed for SMEs. LAIG distils best practices from ISO/IEC 42001 and the NIST AI Risk Management Framework into modular procedures that can be embedded in familiar DevOps workflows and maintained with limited resources. Core elements include clear role assignment, inventory-centred risk classification, checklist-driven impact assessments, targeted mitigation controls, and concise Markdown and Git documentation templates, optionally drafted with large language model assistance and always subject to human verification. An illustrative fintech scenario demonstrates how a company with forty employees can address bias, transparency, and oversight obligations without hiring a dedicated compliance department. By lowering the organisational and financial thresholds for trustworthy AI compliance, LAIG empowers European SMEs to continue innovating while satisfying both the letter and the spirit of the AI Act.</p>
      </abstract>
      <kwd-group>
        <kwd>AI governance</kwd>
        <kwd>SMEs</kwd>
        <kwd>EU Artificial Intelligence Act</kwd>
        <kwd>risk management</kwd>
        <kwd>TRUST-AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Artificial intelligence now reaches deeply into everyday operations of European small and medium-sized
enterprises, yet most of these firms still operate without formal governance structures. The absence
of clear oversight exposes them to legal, ethical, and reputational risk. During the past four years
the share of companies with fewer than 250 employees experimenting with AI doubled, according to
Eurostat [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], while the European Union adopted Regulation (EU) 2024/1689, the “Artificial Intelligence
Act” (AIA), which places high-risk systems such as credit-scoring engines, résumé screeners, and
medical-triage tools under demanding rules covering risk management, data quality, technical
documentation, human oversight, transparency, robustness, cybersecurity, and post-market monitoring [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
The regulation promises simplified forms, fee reductions, and sandbox access for smaller businesses, but
the Commission’s impact assessment estimates that a 50-person enterprise would incur approximately
EUR 216 000–319 000 in first-year compliance costs for a single AI system [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. For resource-constrained
teams these figures represent an existential barrier.
      </p>
      <p>
        Voluntary standards have emerged in parallel. ISO/IEC 42001 sets out a Plan–Do–Check–Act
management system that presumes enterprise-level resources [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], while the NIST AI Risk Management
Framework (AI RMF) encourages an iterative Govern–Map–Measure–Manage cycle and likewise
assumes dedicated compliance staff [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Transparency artefacts such as Model Cards [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and Datasheets
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] enhance documentation yet address only part of the legal obligations and require multidisciplinary
expertise. Consequently, SMEs confront a patchwork of ambitious frameworks without a practical
roadmap, and the gap between guidance and daily engineering practice continues to widen. While
some web-based compliance tools exist, they yield static reports external to development workflows
and therefore offer limited help to fast-paced SME teams.
      </p>
      <p>In response, we present the Lightweight AI Governance framework, an approach tailored to
Git-centred development teams that converts every clause of Annex IV into modular checklists, Markdown
templates, and automated gap analysis scripts. The framework builds on lessons from agile software
engineering and aligns compliance checkpoints with familiar commit and review rituals. By grounding
governance in existing developer workflows, we aim to minimise friction while maximising traceability.
Our study therefore investigates whether the Act’s obligations can be divided into tasks suited to limited
resources, which parts of ISO/IEC 42001 and the NIST framework remain essential, how governance
artefacts can coexist with source code in the same repository and continuous integration pipeline, and
whether the resulting workflow can reduce compliance overhead while preserving auditability and the
core principles of trustworthy AI. Unlike standalone checklist tools, LAIG integrates compliance steps
directly into DevOps practices and automates verification, which we hypothesise can reduce overhead
without compromising rigour.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        Scholars and standard setters agree that trustworthy AI demands both technical safeguards and
organisational controls, yet their guidance differs in level of detail, which complicates implementation for smaller
firms. The European AI Act is the most comprehensive legal instrument to date. Chapter III mandates
risk management, data governance, documentation, transparency, accuracy, robustness, cybersecurity,
and human oversight for every high-risk system, while Annex IV lists the technical documentation
required to demonstrate conformity [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Voluntary frameworks follow a similar direction. The NIST
AI RMF maps trustworthy AI across four lifecycle functions—Govern, Map, Measure, and Manage
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. ISO/IEC 42001 supplies a certifiable management system that embeds ethics, risk assessment, and
continuous improvement through the Plan–Do–Check–Act cycle [
        <xref ref-type="bibr" rid="ref4">4</xref>
         ]. These efforts are supported by
transparency artefacts including Model Cards that summarise intended use, datasets, metrics, and
limitations [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], Datasheets that document data provenance and quality [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and AI FactSheets that adopt
a supplier’s declaration approach [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. A growing set of national and international policy initiatives
is tracked by the OECD.AI Policy Observatory [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. With respect to cybersecurity assurance, the EU
adopted the EUCC certification scheme in January 2024 [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Regulatory approaches draw on these
artefacts, although their production requires time and multidisciplinary expertise that typical SMEs do
not possess [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>
        Focused research on SME governance remains scarce. Analyses warn that the Act’s proportionality
measures may prove illusory without practical templates, shared tools, and subsidised audits [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Legal
analyses clarify the structure and implications of the Act for SMEs, yet practical, publicly documented
workflows remain limited [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Moreover, surveys confirm that small firms see standards as valuable
yet overwhelming and therefore postpone governance until late in development when changes cost
more [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Recent studies on DevOps pipelines demonstrate that embedding risk management directly
into CI/CD loops improves defect detection, reduces incident response times, and ensures continuous
evidence generation [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Similarly, AI-driven test case optimisation integrated into CI/CD reduces
redundant testing and accelerates compliance-relevant validation cycles [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. No open workflow translates
Annex IV obligations and leading standards into proportional, low-overhead practices suitable for lean
engineering teams. LAIG stores governance artefacts in the same repository as the code and enforces
coverage through continuous integration, avoiding separate platforms. The framework therefore aims
to close this gap by coupling modular documentation templates with risk-tiered governance steps
that integrate naturally into DevOps pipelines and deliver trustworthy AI without enterprise-level
bureaucracy.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed methodology</title>
      <p>
        The LAIG framework rests on two intertwined investigations. A clause-by-clause reading of the Artificial
Intelligence Act supplied an exhaustive inventory of legal obligations, while a matching exercise linked
each obligation with the management functions defined in ISO/IEC 42001 and the NIST AI RMF [
        <xref ref-type="bibr" rid="ref2 ref4 ref5">2, 4, 5</xref>
        ].
Complementing this top-down study, we conducted semi-structured interviews with five Polish startups,
each employing between ten and sixty staff. The interviews followed a short protocol with informed
consent and anonymisation, and the transcripts were analysed using template-based coding. Together
these strands revealed both what SMEs must do and what they can realistically sustain, and produced a
practical mapping table that guided the artefacts and checks implemented in LAIG within developer
workflows.
      </p>
      <sec id="sec-3-1">
        <title>3.1. Design objectives</title>
        <p>Interview data showed that small product teams weigh governance steps against tight sprints, quarterly
burn rates, and investor deadlines. In response the framework commits to six principles. First, every
activity must create compliance evidence or mitigate risk in a way that exceeds its effort cost. Second,
the workload is sliced into small autonomous tasks so that a developer can complete a compliance item
between ordinary feature tickets. Third, traceability is guaranteed because every file carries explicit tags
that point to one or more Annex IV clauses and the entire history lives in Git, which offers diff-based
evidence for auditors. Fourth, the architecture embeds governance artefacts in the same repository,
continuous integration system, and code review flow already familiar to engineers, so no one needs to
log into a second platform. When a separate repository is unavoidable, a release build in the product
repository must reference a signed governance commit, and the build fails if that commit does not pass
coverage checks. Fifth, language models may propose text, yet a human must read, approve, and merge
the content, which reduces the risk of hallucination and keeps accountability clear. Sixth, the framework
relies on familiar tools, including Markdown editors, Git command line utilities, and spreadsheets,
so adoption does not mandate new software procurement. These principles aim to preserve velocity,
protect product quality, and satisfy regulators without turning a ten-person team into a paperwork
factory.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Repository architecture</title>
        <p>LAIG treats documentation as source material. A dedicated repository contains seven Markdown files
that mirror the order of Annex IV, beginning with system description and ending with post-market
monitoring. Each file opens with a YAML header that records system identifier, version, and author,
then continues with section text marked by short comment tags that reference clause numbers. A
lightweight linter runs on every pull request to highlight any tag that still lacks narrative. Because each
commit message must quote the clause identifier, reviewers can follow the compliance trail from first
model prototype to production deployment with no separate log.</p>
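        <p>As a minimal sketch of the linter described above, the following script checks one governance file for the required YAML header fields and for tagged sections that still lack narrative. The header keys and the bracketed tag syntax used here stand in for the comment tags mentioned in the text and are assumed conventions, not formats fixed by the framework.</p>

```python
import re

# Assumed tag convention: "[AIA: <clause-id>]" marks a section that must be
# followed by narrative text covering that Annex IV clause.
TAG = re.compile(r"\[AIA:\s*(?P<clause>[^\]]+?)\s*\]")
REQUIRED_HEADER = {"system_id", "version", "author"}

def lint(markdown: str) -> list[str]:
    """Return findings: missing YAML header fields and clause tags with no narrative."""
    findings = []
    parts = markdown.split("---")
    header_keys = set()
    if len(parts) >= 3:  # front matter delimited by '---' lines
        for line in parts[1].splitlines():
            if ":" in line:
                header_keys.add(line.split(":", 1)[0].strip())
    findings += [f"missing header field: {key}"
                 for key in sorted(REQUIRED_HEADER - header_keys)]
    body_lines = "---".join(parts[2:]).splitlines() if len(parts) >= 3 else markdown.splitlines()
    for i, line in enumerate(body_lines):
        match = TAG.search(line)
        if match:
            # Narrative = the next non-blank line, provided it is not another tag.
            nxt = next((l for l in body_lines[i + 1:] if l.strip()), "")
            if not nxt or TAG.search(nxt):
                findings.append(f"clause {match['clause']} has no narrative")
    return findings
```

        <p>In practice such a script would run over every Markdown file changed by a pull request and surface its findings as review comments, so that an empty finding list is a precondition for merging.</p>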
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Workflow</title>
        <p>Daily operation unfolds in five human-controlled stages. A developer completes a short Markdown
intake form that captures purpose, stakeholders, data sources, and maturity stage together with intended
use, decision context, affected users, input data origin, model family, oversight thresholds, and the
expected deployment pathway. A helper script transforms the answers into prompts that reference the
legal clauses, so authors do not start from a blank page. A language model drafts text and inserts VERIFY
tokens wherever confidence is low. A subject matter expert resolves tokens, edits numbers, checks
links, and merges the pull request, which locks the draft into Git history. A continuous integration
job then renders the repository to PDF and, optionally, to DOCX, so the latest technical file is always
downloadable in a single click. The same job applies two gates. First, all applicable Annex IV items are
complete or explicitly marked not applicable with a justification in one sentence, and no VERIFY tokens
remain. Second, if any condition fails, the release is blocked and a remediation ticket is opened. When
governance and product live in different repositories, the product build reads the referenced governance
commit and blocks the release whenever the gates are not satisfied. Folding the entire loop into the
familiar pull request rhythm converts compliance from an end-of-project silo into an everyday habit.</p>
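        <p>The two release gates can be pictured with a short sketch. It assumes that unresolved VERIFY tokens appear literally in the Markdown sources and that the Annex IV checklist has been parsed into rows with completion, not-applicable, and justification fields; these field names are illustrative, not prescribed by the framework.</p>

```python
def release_gates(sources: dict[str, str], clause_rows: list[dict]) -> list[str]:
    """Return blocking findings; an empty list means the release may proceed.

    `sources` maps file name -> Markdown text; `clause_rows` is the parsed
    Annex IV checklist (field names here are assumptions).
    """
    findings = []
    # Gate 1a: no unresolved VERIFY tokens anywhere in the technical file.
    for name, text in sources.items():
        if "VERIFY" in text:
            findings.append(f"{name}: unresolved VERIFY token")
    # Gate 1b: every row is complete, or N/A with a one-sentence justification.
    for row in clause_rows:
        if row.get("not_applicable"):
            if not row.get("justification", "").strip():
                findings.append(f"{row['clause']}: N/A without justification")
        elif not row.get("done"):
            findings.append(f"{row['clause']}: incomplete")
    return findings

# Gate 2: in CI, any finding blocks the release and opens a remediation
# ticket, typically by exiting non-zero so the pipeline fails.
```
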
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Clause mapping mechanism</title>
        <p>
          To operationalise the principle that every action must generate evidence or mitigation, the governance
repository is linked to the implementation repository through a shared CI/CD pipeline [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. Each
change to the source code follows a merge request procedure in which approval depends on completing
a checklist that references the relevant clauses of Annex IV. During merging, an automatic compliance
linter verifies the presence and consistency of compliance metadata. If the required documentation or
mitigating measures are missing, the system blocks integration with the main branch. This solution
eliminates the risk of purely declarative treatment of obligations and ensures that the compliance
evidence trail is created in parallel with software development.
        </p>
        <p>The clause mapping table pairs every Annex IV requirement with a heading path and a checkbox. If
a requirement is irrelevant, such as photographs of a physical device for a pure software product, the
author marks the row not applicable and writes a one-sentence justification. Because the table itself
is version-controlled, auditors can review how coverage improves over time and developers can run
automated diff checks to catch accidental deletions. Closely related sub-requirements are grouped to
reduce boilerplate, and each group links to concrete artefacts and tests. This repository-first mapping
contrasts with questionnaire tools because it is versioned alongside the code and enforced through
automated checks during pull requests and releases.</p>
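        <p>The mapping table and the accompanying diff check can be sketched as follows. The three-column Markdown layout is an assumption for illustration; the framework fixes only the pairing of each Annex IV clause with a heading path and a checkbox or a not-applicable marker.</p>

```python
def parse_map(table: str) -> dict[str, bool]:
    """Parse rows like '| 1(a) | system.md#overview | [x] |' into clause -> covered."""
    rows = {}
    for line in table.strip().splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        # Skip the header row and the Markdown separator row.
        if len(cells) >= 3 and cells[0] and cells[0] != "Clause" and not cells[0].startswith("-"):
            # '[x]' means completed; 'N/A' rows carry their justification elsewhere.
            rows[cells[0]] = cells[2].lower() in ("[x]", "n/a")
    return rows

def deleted_rows(previous: set[str], current: dict[str, bool]) -> list[str]:
    """Diff check: clauses present in the last revision but missing now."""
    return sorted(previous - set(current))
```

        <p>Because the table is version-controlled, the previous clause set can be read from the parent commit, so an accidental deletion surfaces as a failing check rather than a silent coverage gap.</p>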
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Illustrative scenario</title>
      <p>FinPay is a hypothetical fintech based in Warsaw that employs forty people and provides AI-driven credit
scoring services. As creditworthiness assessment falls within Annex III of the AI Act as a high-risk use,
FinPay must demonstrate comprehensive risk management, maintain technical documentation, ensure
human oversight, transparency, robustness, and cybersecurity, and implement post-market monitoring.
Before adopting LAIG, compliance evidence was scattered across unversioned documents and ad
hoc spreadsheets, hindering audit readiness, obscuring accountability, and undermining traceability.
Adoption proceeded in three compact sprints that embedded governance in everyday engineering.
First, the team created a dedicated governance repository, assigned named responsibility for each
relevant legal clause, and completed an intake questionnaire capturing purpose, stakeholders, data
lineage, model family, oversight thresholds, and the intended deployment path. A machine-readable
clause map linked Annex IV requirements to specific document headings and checkboxes, ensuring
that each item was either completed or explicitly marked as not applicable with justification. Next,
developers and data scientists drafted seven Markdown sections mirroring Annex IV using language
model assistance for first drafts with explicit verification markers, while legal and risk specialists
resolved markers, corrected figures, validated links, and tightened claims. In parallel, the modelling
team addressed dataset imbalance with balanced resampling and recorded the rationale for calibration
and fairness thresholds alongside the data governance narrative. Finally, engineers implemented
continuous integration that automatically renders the repository into a technical file with each change
and enforces two release gates: full clause coverage with no unresolved verification markers, and a
signed governance commit referenced by the product build. If either gate fails, the release is blocked
and a remediation ticket is generated. The first end-to-end execution of this workflow produced a
seventeen-page technical file, ready for engagement with a regulatory sandbox and for supporting early
dialogue with assessors. Developers reported that handling governance artefacts through pull requests
felt natural, while managers valued the living audit trail in Git and the clear line from requirement to
mitigation. Although formal time and cost measurements are forthcoming, the team reports higher
confidence and improved readiness for its first conformity assessment under the AIA.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>The LAIG framework gives small enterprises a practical route to trustworthy AI by folding risk checks,
human oversight, and continuous monitoring into the Git-based routines developers already follow.
This arrangement addresses Annex IV documentation duties and supports the operationalisation of
key Chapter III obligations within development workflows, while creating a minimum viable Plan–Do–
Check–Act loop that aligns with the Govern and Manage functions of the NIST AI RMF. By slicing
Annex IV into bite-sized tasks, the method lets teams focus on the highest risks first, a need often
voiced in founder interviews. Governance files sit beside source code so engineers can move from
feature branch to compliance update without changing context, and each commit records a searchable
audit trail that managers and examiners value. Automated gap tests flag missing clauses the moment a
pull request appears, and language models generate draft text that experts refine, a pairing that speeds
writing yet keeps the final judgement human.</p>
      <p>These gains come with caveats. The simplified nature of the framework, which is its main advantage
for SMEs, simultaneously creates risks that must be managed. LAIG in its basic form is best suited for
low- and moderate-risk systems. For AI systems classified as high-risk under the AI Act, such as tools for
credit scoring or recruitment screening, a simplified governance approach alone may prove insufficient.
SMEs implementing such systems will need to invest in more rigorous control, risk management,
and post-market monitoring mechanisms in line with the strict requirements of the regulation. In
this context, LAIG should be interpreted as a transitional instrument of proportional governance. Its
role is not to replace comprehensive conformity assessment procedures prescribed for high-risk AI
systems, but rather to enable SMEs to initiate systematic preparation of Annex IV documentation
within their existing development workflows. The artefacts created in this process—Git-based audit
trails, clause-referenced checklists, and modular records—constitute a preparatory layer of evidence
that can be expanded into the full compliance structures required under ISO/IEC 42001 or NIST-aligned
frameworks and presented to notified bodies during formal assessments.</p>
      <p>
        A lightweight approach omits some formalities, which can lead to overlooking hidden issues such
as undiscovered bias in training data or security vulnerabilities [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Although LAIG promotes bias
mitigation through balanced resampling, its effectiveness depends on team diligence. There is also a risk
that automated templates and checklists will be treated as a formality rather than an opportunity for
critical risk assessment. The framework therefore depends on an organisational culture that supports
critical evaluation of model outputs and continuous interrogation of compliance artefacts [
        <xref ref-type="bibr" rid="ref11 ref16">11, 16</xref>
        ]. By
embedding governance routines directly into software engineering processes rather than relegating
them to a separate compliance silo, LAIG fosters incremental institutional capacity-building. This
incrementalism is consistent with the proportionality principle embedded in the AI Act and ensures
that organisations can scale governance maturity progressively without disruptive restructuring when
high-risk classification applies.
      </p>
      <p>
        Using language models to assist with documentation speeds drafting but introduces concerns about
data confidentiality and the accuracy of generated content. Although the framework requires human
verification, the risk associated with sending sensitive information to external service providers remains
significant. The FinPay case remains illustrative rather than empirical; hardware-rich products will
require extra safety documentation, and sector-specific laws may layer additional duties on top of the AI
Act. Future pilot deployments are expected to yield empirical data on time and cost efficiency, while the
development of open-source tools for automated clause verification and dossier generation, combined
with cooperation with notified bodies, may support the emergence of a “light audit track” in which
the repository itself serves as primary evidence [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. Accordingly, LAIG should be regarded not as a
substitute for the comprehensive compliance architecture required under European Union law, but
as an instrument of progressive compliance that operationalises the transition from minimum viable
governance to the full spectrum of obligations triggered by deployment of high-risk systems.
      </p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>The European AI Act asks SMEs to deliver governance that rivals large corporations even though
their budgets are far smaller. The LAIG framework responds by turning every Annex IV duty, along
with the intent of ISO/IEC 42001 and the NIST AI RMF, into modular routines that fit naturally within
existing engineering practice. Documentation resides alongside source code in a version-controlled
repository, and every change generates verifiable evidence that can be independently reviewed. Clause
coverage is enforced through structured checklists and machine-readable mappings, ensuring that each
requirement is either fulfilled or explicitly marked as not applicable with justification. Automated CI/CD
gates prevent releases in which required artefacts or reviews are incomplete. Optional language model
assistance accelerates drafting but human approval remains decisive, which preserves accountability
and limits the risk of error. Teams define acceptance thresholds for accuracy, calibration, and fairness
in line with the system’s risk profile and track them across releases. In this way LAIG establishes a
disciplined cycle of planning, doing, checking, and improving that protects developer velocity while
creating a durable audit trail.</p>
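      <p>The acceptance-threshold tracking described above can be sketched as a release check; the metric names and bounds below are illustrative placeholders, to be agreed per system risk profile rather than taken from the framework.</p>

```python
# Illustrative thresholds, agreed per system risk profile (placeholder values).
THRESHOLDS = {
    "accuracy": ("min", 0.85),
    "calibration_error": ("max", 0.05),
    "demographic_parity_gap": ("max", 0.10),
}

def within_thresholds(metrics: dict[str, float]) -> list[str]:
    """Return the names of metrics that violate, or are missing from, the agreed bounds."""
    violations = []
    for name, (kind, bound) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            violations.append(name)  # an untracked metric also blocks the release
        elif kind == "min" and value < bound:
            violations.append(name)
        elif kind == "max" and value > bound:
            violations.append(name)
    return violations
```

      <p>Recording the metric values alongside each release commit lets the team track drift against the thresholds across versions, in the same audit trail as the rest of the technical file.</p>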
      <p>Next steps are empirical and collaborative. Planned pilots in finance and health analytics will measure
documentation effort, auditor revision cycles, developer usability, and outcome quality including
calibration and fairness. Findings will feed improvements to templates, coverage checks, and acceptance
criteria and will inform engagement with regulators and notified bodies toward a credible light audit
path in which a well-maintained repository can serve as primary evidence of compliance. If the expected
gains are confirmed, LAIG will lower the organisational and financial threshold for trustworthy AI and
help European SMEs sustain innovation while meeting both the letter and the spirit of the Act.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT to generate Figures 1 and 2. After using
this tool, the authors reviewed and edited the content as needed and take full responsibility for the
publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Eurostat. Use of artificial intelligence in enterprises. Statistics Explained, data extracted January 2025, planned update January 2026. Available at: ec.europa.eu/eurostat/statistics-explained.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] European Parliament and Council. Regulation (EU) 2024/1689 - Artificial Intelligence Act. Official Journal of the European Union, 2024. Available at: eur-lex.europa.eu/eli/reg/2024/1689/oj.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] European Commission (DG CONNECT et al.). Study to support an impact assessment of regulatory requirements for Artificial Intelligence in Europe - Final report. Publications Office of the European Union, 2021. Available at: artificialintelligenceact.eu/AIA-COM-Impact-Assessment.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] ISO/IEC. ISO/IEC 42001:2023 - Artificial intelligence - Management system - Requirements. International Organization for Standardization, 2023.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] National Institute of Standards and Technology. AI Risk Management Framework (AI RMF 1.0). NIST, 2023. doi: 10.6028/NIST.AI.100-1.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Margaret Mitchell, Simone Wu, Andrew Zaldivar, and colleagues. Model Cards for Model Reporting. In Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (FAccT '19), pages 220-229. ACM, 2019. doi: 10.1145/3287560.3287596.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, and colleagues. Datasheets for Datasets. Communications of the ACM, 64(12):86-92, 2021. doi: 10.1145/3458723.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Rachel K. E. Bellamy, Kush R. Dey, Michael Hind, and colleagues. AI FactSheets: Increasing trust in AI services through supplier's declarations of conformity. IBM Journal of Research and Development, 63(4/5):6:1-6:13, 2019. doi: 10.1147/JRD.2019.2942288.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <collab>OECD.AI Policy Observatory</collab>
          .
          <article-title>Policies &amp; initiatives - global navigator (overview)</article-title>
          .
          <source>Living repository of national and international AI policies</source>
          ,
          <year>2024</year>
          -2025. Available at: oecd.ai/en/dashboards/overview.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <collab>European Union Agency for Cybersecurity (ENISA)</collab>
          .
          <article-title>An EU Prime! The EU adopts the first cybersecurity certification scheme (EUCC)</article-title>
          . News release, 31 January
          <year>2024</year>
          . Available at: enisa.europa.eu/news/EUCC.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Mehdi S.</given-names>
            <surname>Soudi</surname>
          </string-name>
          and
          <string-name>
            <given-names>Michel</given-names>
            <surname>Bauters</surname>
          </string-name>
          .
          <article-title>AI guidelines and ethical readiness inside SMEs</article-title>
          .
          <source>Digital Society</source>
          ,
          <volume>3</volume>
          :
          <fpage>49</fpage>
          ,
          <year>2024</year>
          . doi:
          <pub-id pub-id-type="doi">10.1007/s44206-024-00149-9</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Veale</surname>
          </string-name>
          and
          <string-name>
            <given-names>Frederik</given-names>
            <surname>Zuiderveen Borgesius</surname>
          </string-name>
          .
          <article-title>Demystifying the Draft EU Artificial Intelligence Act</article-title>
          .
          <source>Computer Law Review International</source>
          ,
          <volume>22</volume>
          (
          <issue>4</issue>
          ):
          <fpage>97</fpage>
          -
          <lpage>112</lpage>
          ,
          <year>2021</year>
          . doi:
          <pub-id pub-id-type="doi">10.9785/cri-2021-220402</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Pavan M.</given-names>
            <surname>Katikireddi</surname>
          </string-name>
          .
          <article-title>Smart risk management in DevOps using AI</article-title>
          .
          <source>International Journal of Scientific Research in Science and Technology</source>
          ,
          <volume>10</volume>
          (
          <issue>3</issue>
          ):
          <fpage>1248</fpage>
          -
          <lpage>1253</lpage>
          ,
          <year>2023</year>
          . doi:
          <pub-id pub-id-type="doi">10.32628/IJSRST523103169</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Alice</given-names>
            <surname>John</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Isaac</given-names>
            <surname>John</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Tiberius</given-names>
            <surname>Dion</surname>
          </string-name>
          .
          <article-title>Integrating AI-driven test case optimization into CI/CD pipelines</article-title>
          .
          <source>SSRN preprint</source>
          , May
          <year>2025</year>
          . doi:
          <pub-id pub-id-type="doi">10.2139/ssrn.5252630</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Patrick</surname>
            <given-names>Guldimann</given-names>
          </string-name>
          , Anton Spiridonov, Ralf Staab,
          <article-title>and colleagues</article-title>
          .
          <string-name>
            <surname>COMPL-AI Framework</surname>
          </string-name>
          :
          <article-title>A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act</article-title>
          .
          <source>arXiv preprint arXiv:2410.07959</source>
          ,
          <year>2024</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.2410.07959.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Manikandan V.</given-names>
            <surname>Krishnamoorthy</surname>
          </string-name>
          .
          <article-title>Meta-Sealing: A revolutionizing integrity assurance protocol for transparent, tamperproof, and trustworthy AI systems</article-title>
          .
          <source>arXiv preprint arXiv:2411.00069</source>
          ,
          <year>2024</year>
          . doi:
          <pub-id pub-id-type="doi">10.48550/arXiv.2411.00069</pub-id>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>