<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Governing AI in Education: A Cross-Organization Analysis of International Policy, Law, and Standards</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dmytro Chumachenko</string-name>
          <email>dichumachenko@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff5">5</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ann Fitz-Gerald</string-name>
          <email>afitz-gerald@balsillieschool.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff6">6</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Artem Artyukhov</string-name>
          <email>a.artyukhov@pohnp.sumdu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serhiy Lyeonov</string-name>
          <email>serhiy.lyeonov@polsl.pl</email>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Balsillie School of International Affairs</institution>
          ,
          <addr-line>D67 Erb str. W., N2L 6C2, Waterloo, ON</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Bratislava University of Economics and Business</institution>
          ,
          <addr-line>Dolnozemská cesta 1/b, 852 35 Petržalka</addr-line>
          ,
          <country country="SK">Slovakia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>National Aerospace University “Kharkiv Aviation Institute”</institution>
          ,
          <addr-line>Vadym Manko str., 17, 61070 Kharkiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Silesian University of Technology</institution>
          ,
          <addr-line>ul. Akademicka 2A, 44-100 Gliwice</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Sumy State University</institution>
          ,
          <addr-line>Kharkivska str., 116, 40000 Sumy</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff5">
          <label>5</label>
          <institution>University of Warwick</institution>
          ,
          <addr-line>CV4 7AL, Coventry</addr-line>
          ,
          <country country="GB">United Kingdom</country>
        </aff>
        <aff id="aff6">
          <label>6</label>
          <institution>Wilfrid Laurier University</institution>
          ,
          <addr-line>75 University Ave., N2L 3C5, Waterloo, ON</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>Generative AI is quickly diffusing in schools and universities, prompting international organizations to issue sectoral guidance, binding rules, and technical standards to steer safe and equitable use. However, these instruments vary in scope and strength, creating a need for a cross-organization synthesis focused on education. We conducted a document-based comparative policy analysis of publicly available instruments issued by UNESCO, OECD, UNICEF, the Council of Europe, the European Union, the World Bank, and standards bodies (ISO/IEC; CEN-CENELEC). The corpus included education guidance, cross-sector principles, binding law (EU AI Act; CoE convention), and operational standards (ISO/IEC 42001; ISO/IEC 23894). Instruments were coded against standard governance dimensions and cross-walked to relevant standards. We find broad convergence on human-centred, rights-based aims and safeguards for transparency, accountability, and children's data. Persistent gaps include education-specific indicators for monitoring and implementation support for low-resource contexts. International AI policy for education is consolidating into a layered model: sectoral guidance (UNESCO/OECD/UNICEF), binding rights-based law (CoE; EU AI Act), and standards-led operationalisation. Scientifically, the synthesis links normative principles to enforceable obligations and auditable practices. It supports near-term steps for ministries and institutions.</p>
      </abstract>
      <kwd-group>
        <kwd>AI governance</kwd>
        <kwd>AI policy</kwd>
        <kwd>education policy</kwd>
        <kwd>AI in education</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Artificial intelligence (AI), especially generative AI, has diffused across countries with unusual
speed, reaching hundreds of economies and altering practices in schools and universities. Recent
cross-country evidence shows rapid global uptake, with usage patterns skewed toward younger
and more educated users but strong adoption in many middle-income economies [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This
underscores both the opportunity and the unevenness of access. At the same time, major reviews of technology in
education caution that evidence of impact is mixed and context dependent, and call for stronger
governance to ensure equity and effectiveness [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. These trends make policy guidance for AI in
education urgent and consequential.
      </p>
      <p>
        International organizations (IOs) have begun to play a central role in setting common
expectations for trustworthy AI in education. UNESCO has issued global guidance for generative
AI in education, building on earlier sector-specific instruments [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The OECD’s intergovernmental
AI Principles provide a cross-sector foundation for human-centred and trustworthy AI [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. UNICEF
has articulated child-rights requirements relevant to schooling [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The Council of Europe (CoE)
opened for signature the first international, legally binding AI treaty grounded in human rights [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
The European Union (EU) adopted the AI Act, a comprehensive, risk-based regulatory framework
relevant to education providers and vendors [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. In parallel, international standards bodies have
introduced management and risk frameworks (ISO/IEC 42001, ISO/IEC 23894) that many systems
and suppliers can use to operationalize governance [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ].
      </p>
      <p>
        Education policy debates focus on concrete risks and safeguards: the protection of children’s
data and well-being, academic integrity and assessment, transparency of automated decisions, and
the capacity of teachers and institutions to use AI responsibly. Recent instruments address these
concerns from different angles: for example, the EU
AI Act prohibits emotion-recognition systems in educational settings [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], UNESCO’s guidance
outlines near-term actions for assessment and integrity [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and UNICEF’s policy guidance centers
on child rights considerations for profiling and data use [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        The landscape remains fragmented across binding regulation, soft law guidance, and voluntary
standards, with uneven specificity for classroom practice, procurement, evaluation, and support for
low-resource contexts [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. IOs highlight the need to link governance with implementation
support, funding, capacity-building, and practical toolkits, so that policies translate into improved
teaching and learning rather than technology-first adoption. Standards such as ISO/IEC 42001 and
23894 can help organizations operationalize risk management and continuous improvement, but
the availability of resources and evidence still varies widely across systems.
      </p>
      <p>This paper addresses these gaps by systematically analyzing publicly available documents
issued by international organizations that shape AI policy in education. It maps convergence and
divergence across instruments, identifies coverage gaps, and considers how cross-sector
frameworks, education-specific guidance, and international standards can be aligned to support
equitable, evidence-informed adoption.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Materials and Methods</title>
      <p>We conducted a document-based comparative policy analysis of publicly available instruments on
AI in education issued by IOs. The approach combines a scoping-review style search and screening
process to build the corpus, qualitative content analysis, and structured comparative coding. This
design is appropriate for synthesizing heterogeneous policy texts and mapping areas of
convergence, divergence, and gaps.</p>
      <p>We included instruments produced or sponsored by: UNESCO, OECD, CoE, EU, UNICEF, the
World Bank, and ISO/IEC JTC 1/SC 42. We targeted education-specific guidance, cross-sector AI
principles, legally binding instruments, child-rights policy guidance, system-readiness and
governance perspectives, and AI management/risk standards within these bodies.</p>
      <p>To capture the current governance baseline, we limited inclusion to documents published or last
updated between 2018 and September 2025. English-language versions were prioritized, and we
used the English text for coding where multiple official languages existed.</p>
      <p>We ran structured searches across official IO domains and open repositories using combinations
of terms such as “AI AND education policy,” “guidance,” “framework,” “treaty,” and “standard.” We
retrieved documents directly from authoritative pages. We also captured World Bank briefs and
reports on system-level governance and implementation.</p>
      <p>Included items were produced or endorsed by the target IOs, addressed AI uses in education or
contained cross-sector provisions with clear implications for education systems, and were in the
form of final texts, official drafts, or formally adopted standards. We excluded news articles,
opinion pieces, vendor white papers, and items without public access. Screening proceeded in two
stages (title/abstract/webpage, then full-text).</p>
      <p>We developed an a priori codebook aligned to recurrent policy dimensions in IO instruments:
objectives and values (rights, human-centric framing); scope and definitions; risk taxonomy and
prohibitions/constraints; transparency and accountability; data governance and child rights;
assessment and academic integrity; teacher capacity and professional development; procurement,
assurance, and conformity assessment; monitoring, evaluation, and impact indicators; and
implementation supports.</p>
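      <p>As an illustration only, the codebook above can be expressed as a small data structure to make coding auditable. The following Python sketch uses the dimension names from this section, while the ordinal levels ("absent", "mentioned", "operationalized") are hypothetical labels introduced for the example, not the study's actual scale:</p>

```python
# Illustrative rendering of the a priori codebook described above.
# Dimension names follow the paper; the ordinal coding levels are
# hypothetical labels for this sketch.

CODEBOOK_DIMENSIONS = [
    "objectives and values",
    "scope and definitions",
    "risk taxonomy and prohibitions/constraints",
    "transparency and accountability",
    "data governance and child rights",
    "assessment and academic integrity",
    "teacher capacity and professional development",
    "procurement, assurance, and conformity assessment",
    "monitoring, evaluation, and impact indicators",
    "implementation supports",
]

CODING_LEVELS = ("absent", "mentioned", "operationalized")

def code_instrument(ratings):
    """Return a complete coding for one instrument: every dimension
    gets a level, and only the defined levels are accepted."""
    coded = {}
    for dim in CODEBOOK_DIMENSIONS:
        level = ratings.get(dim, "absent")
        if level not in CODING_LEVELS:
            raise ValueError(f"unknown level: {level}")
        coded[dim] = level
    return coded

example = code_instrument({"transparency and accountability": "operationalized"})
print(sum(1 for v in example.values() if v != "absent"))  # prints 1
```

<p>A coding of this kind forces every instrument to be rated on every dimension, which is what makes the side-by-side matrices below possible.</p>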
      <p>We constructed organization-by-dimension matrices that enable side-by-side comparisons of
coverage depth, instrument strength, and education specificity. Findings were synthesized through
constant comparison to identify convergences, divergences, and coverage gaps across IOs, with
illustrative excerpts traced to source documents. Content-analytic procedures follow established
practice for transparency and replicability.</p>
      <p>Because standards function as operational complements to policy and regulation, we coded
ISO/IEC 42001 and ISO/IEC 23894 and cross-walked their requirements to governance dimensions
and regulatory references where applicable. This allowed us to examine how standards can
operationalize IO guidance within education systems and vendors.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <p>Across the corpus of documents reviewed, IOs now agree that governing AI in education requires a
mix of binding rules, standards, and practical guidance that can be adapted to local contexts.
Collected sources let us triangulate how scope, obligations, and support mechanisms are
crystallizing for education systems. The overview of documents is presented in Table 1.</p>
      <sec id="sec-3-2">
        <title>Standards (ISO/IEC; CEN-CENELEC)</title>
        <p>
          There is growing alignment on both foundational concepts and on who bears responsibility. The
EU AI Act supplies formal definitions and assigns duties to “providers” and “deployers” [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]. At the
same time, UNESCO’s guidance stresses human agency, inclusion, and equity [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], and UNICEF
reframes obligations through the Convention on the Rights of the Child [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. A notable novelty is
the Act’s Article 4 requirement that organizations ensure a “sufficient level” of AI literacy for staff
and others using systems on their behalf. This expectation fits the education sector’s need for
teacher capability rather than tool bans alone.
        </p>
        <p>
          The analysis indicates that governance instruments are stratified. OECD’s cross-country review
finds that, as of 2024, most jurisdictions relied on non-binding school or ministry guidance for
GenAI, with only a minority proposing sector-specific regulation [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. The EU has enacted
cross-sector binding rules covering education use cases. The Council of Europe opened a legally binding
human-rights convention on AI to global signatories in September 2024 [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. This mix confirms a
pattern: soft guidance to shape school practice, anchored by harder horizontal law to protect rights
and set market obligations.
        </p>
        <p>Substantive priorities are remarkably consistent across organizations. OECD reports that
governments prioritize data protection and privacy, alongside accuracy/reliability,
transparency/explainability, fairness/bias mitigation, and (in many systems) worries about skill
attrition. Likewise, UNESCO emphasizes equity, human agency, and responsible data use. UNICEF
sets nine requirements for child-centred AI. These sources show a stable core of policy concerns
that inform institutional rules, procurement, and classroom practice.</p>
        <table-wrap id="table1">
          <label>Table 1</label>
          <caption><p>Overview of documents reviewed: World Bank and standards bodies.</p></caption>
          <table>
            <thead>
              <tr><th>Organization</th><th>Key documents</th><th>Instrument type</th><th>Education focus</th><th>Status</th></tr>
            </thead>
            <tbody>
              <tr>
                <td>World Bank</td>
                <td>AI in Education briefs and HE reports (2024–25)</td>
                <td>Analytical/policy briefs (non-binding)</td>
                <td>System readiness; use cases (tutoring, teacher support); LMIC perspectives</td>
                <td/>
              </tr>
              <tr>
                <td>Standards (ISO/IEC; CEN-CENELEC)</td>
                <td>ISO/IEC 42001:2023 (AI management systems); ISO/IEC 23894:2023 (AI risk management); EU harmonized standards pipeline via CEN-CENELEC JTC 21</td>
                <td>Voluntary standards (presumption of conformity once harmonised in the EU)</td>
                <td>Operational scaffolding for governance, risk, oversight, and assurance in education providers and vendor products</td>
                <td>ISO/IEC standards published; EU harmonised standards in development; JRC brief summarises 37 activities for the AI Act</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
        <p>Education-specific legal risk classification is clearest in the EU. The AI Act treats several
education uses as “high-risk,” including systems for admission/assignment, grading and evaluation
(including proctoring), and other use cases that can materially influence an individual’s educational
trajectory. It also prohibits emotion-inference in educational settings. Application is phased:
prohibitions and AI-literacy duties began on 2 February 2025, obligations for general-purpose AI
and governance applied from 2 August 2025, and most remaining provisions apply as of 2 August
2026, with extended dates for certain high-risk categories. These dates create a tangible compliance
horizon for ministries, school networks, exam bodies, and vendors. The overview of EU AI Act
items connected to education is presented in Table 2.</p>
      </sec>
      <sec id="sec-3-3">
        <title>EU AI Act provisions and implementation levers</title>
        <table-wrap id="table2">
          <label>Table 2</label>
          <caption><p>EU AI Act items connected to education.</p></caption>
          <table>
            <thead>
              <tr><th>Item</th><th>What it means for education</th><th>Where it is in the Act</th></tr>
            </thead>
            <tbody>
              <tr>
                <td>Prohibited practices</td>
                <td>Do not procure or deploy emotion-inference in classrooms/exams; avoid biometric scraping and sensitive biometric categorisation.</td>
                <td>Art. 5 prohibitions; EU summary page confirms the education/workplace context.</td>
              </tr>
              <tr>
                <td>High-risk uses</td>
                <td>Admissions/assignment, evaluation/steering (incl. automated grading), level-setting, and proctoring fall under high-risk: require risk management, data governance, technical documentation, human oversight, logging, and post-market monitoring; registration before public-sector deployment.</td>
                <td>EUR-Lex AI Act and Annex III overview.</td>
              </tr>
              <tr>
                <td>AI literacy</td>
                <td>Providers and deployers must ensure a sufficient level of AI literacy for staff/users. Institutions should evidence staff training and student-facing guidance.</td>
                <td>Article 4 text; Commission FAQ on AI literacy.</td>
              </tr>
              <tr>
                <td>General-purpose AI</td>
                <td>Model providers must publish training-data summaries and meet security/testing duties (more for “systemic-risk” models). Downstream edtech vendors and institutions should request these disclosures from providers.</td>
                <td>EU AI Act GPAI section and application timeline.</td>
              </tr>
              <tr>
                <td>Application timeline</td>
                <td>In force 1 August 2024; prohibitions and AI-literacy duties from 2 February 2025; GPAI from 2 August 2025; most high-risk duties from 2 August 2026 (embedded-product cases to 2 August 2027). Plan procurement and updates accordingly.</td>
                <td>Commission “AI Act” page (timeline).</td>
              </tr>
              <tr>
                <td>Standards and assurance</td>
                <td>Use ISO/IEC 42001 (AI management systems) and ISO/IEC 23894 (risk management) now to prepare; watch for CEN-CENELEC JTC 21 harmonized standards, which will grant a presumption of conformity once cited in the OJEU.</td>
                <td>ISO pages; JRC brief on harmonised standards; CEN-CENELEC overview.</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
        <p>
          Standardization is becoming a bridge between legal requirements and school-system practice to
support implementation. ISO/IEC 42001 establishes an AI management system framework that
organizations can adopt to operationalize policies and controls. ISO/IEC 23894 provides
risk-management guidance across the AI lifecycle. In the EU, CEN-CENELEC JTC 21 is drafting
harmonised standards that, once cited in the Official Journal, confer a presumption of conformity
with the Act [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. JRC’s 2024 Science for Policy brief explains the expected characteristics of these
standards and how they will complement existing ISO/IEC work.
        </p>
        <p>Capacity building and AI literacy emerge as cross-cutting levers. The OECD recommends
integrating GenAI into teacher training and providing national resources that cover technical,
pedagogical, and ethical dimensions. The EU’s Article 4 obligation makes literacy a legal duty for
providers and deployers, applicable since February 2025. These policies suggest that teacher
professional learning, not point solutions, has become the policy baseline, and that institutions
should document literacy programmes as part of compliance evidence.</p>
        <p>Assessment and academic integrity are focal stress points where guidance is evolving. UNESCO
encourages redesigning assessment and coursework rather than relying on detection alone. The
OECD documents that many countries allow teacher use of GenAI and are experimenting with
restrictions around high-stakes exams while encouraging teacher training and providing exemplars
of classroom use. The overall pattern is a shift from “ban or detect” toward assessment redesign,
transparency to students, and clear exam protocols.</p>
        <p>Equity considerations are central in international guidance. UNICEF’s policy document
emphasizes non-discrimination, inclusion, and safeguards tailored to children’s rights. UNESCO’s
policymaker guidance and related background analyses highlight digital divides and potential
harms to young people if AI is deployed without attention to rights, well-being, and access. For
education systems, this translates into impact assessments that explicitly consider vulnerable
learners and safeguards in procurement and classroom deployment.</p>
        <p>There is also movement on general-purpose models that underpin many education tools. The
European Commission issued a voluntary Code of Practice for GPAI and published guidelines
clarifying obligations for GPAI providers ahead of their entry into application on 2 August 2025.
These instruments seek to make transparency, risk assessment, and incident reporting more
concrete for model providers whose systems are embedded in ed-tech products. This upstream
clarity is consequential for downstream education buyers and regulators.</p>
        <p>Despite rapid activity, evidence gaps remain. The OECD notes that policymakers still lack
reliable information on what AI can and cannot do, complicating curriculum, assessment design,
and policy calibration. Development finance institutions likewise caution that universities and
ministries face institutional-capacity challenges when integrating new tools at pace. This
reinforces the need for iterative pilots with embedded evaluation and for research partnerships that
can inform policy revision cycles.</p>
        <p>International AI in education policy is coalescing around a rights-based, risk-based core, with
enforceable horizontal regulation (EU/CoE) increasingly complemented by sector-specific guidance
(UNESCO/OECD/UNICEF) and by management and risk-standards that operationalize day-to-day
practice. Education-relevant provisions mapped to core policy dimensions are presented in Table 3.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>
        This analysis indicates that IOs have converged on a layered governance model for AI in education
that blends rights-based principles, risk-based regulation, and operational standards. A key
strategic gain of this layering is that it can reconcile the breadth of education use cases with the
need for verifiable safeguards. Soft law instruments articulate values and good practice, while
binding law (notably in Europe) establishes enforceable duties and bans. Standards then provide
routines for implementation and audit. Contemporary governance scholarship supports this
division of labour and cautions that the value of such regimes turns on how well high-level
principles are translated into sector-specific controls and monitoring. In particular, analyses of
generative AI governance emphasize the importance of concrete mechanisms to avoid ethics
“thinness” and enforcement gaps, an observation directly relevant to education systems adopting
general-purpose AI and assessment tools [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        The EU’s prohibition of emotion-inference in educational settings aligns with longstanding
concerns in the psychological science literature about the validity of inferring internal emotional
states from facial movements alone. The paper [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] argues that context, culture, and individual
variation undermine simple mappings from face to emotion. Newer studies underline how easily
“authentic” expressions can be simulated or misread [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. In an education context, where
high-stakes decisions about behaviour or performance may be at issue, this body of evidence provides a
clear rationale for bright-line restrictions. The policy reduces the risk of spurious inferences and
unequal error burdens across student groups.
      </p>
      <p>
        Other strands of affective computing research continue to report technical progress in
classroom-facing emotion recognition systems, including multimodal and real-time approaches
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. The coexistence of methodological advances with validity critiques reinforces a core policy
point. Improvements in accuracy on benchmark datasets do not resolve questions about construct
validity, contextual bias, or proportionality in schools [
        <xref ref-type="bibr" rid="ref16">16</xref>
         ]. For education authorities, if such systems
are allowed at all, the prudent course is to treat them as research pilots with strict oversight rather
than as routine instruments for assessment or discipline.
      </p>
      <p>
        Concerns about automated proctoring further illustrate why risk-based controls matter in
education. Studies document privacy anxieties, contested consent, and perceived intrusiveness, and
have synthesized evidence of potential disparate impacts and opacity in commercial tools [
        <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
        ].
These findings support regulatory requirements for risk assessment, documentation, human
oversight, and post-market monitoring when institutions procure or operate proctoring systems.
This implies shifting from ad hoc adoption to documented justifications, limited use cases, and
alternatives that reduce surveillance while protecting assessment integrity.
      </p>
      <p>
        Across the corpus, current research converges on a central pedagogical message: generative AI
weakens the reliability of many take-home text assignments as measures of individual learning,
making assessment redesign, not detection-only strategies, the sustainable response [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
Systematic reviews and empirical studies report mixed or adverse effects on perceived integrity and
authenticity when traditional formats persist, and they recommend redesigned tasks coupled with
more explicit integrity norms [
        <xref ref-type="bibr" rid="ref20 ref21">20, 21</xref>
        ]. This evidence supports the direction of recent IO guidance
but pushes further by prioritizing robust validity arguments for new assessment formats and
rigorous evaluation of their fairness and workload effects.
      </p>
      <p>
        Teacher capacity and AI literacy emerge as binding constraints on responsible adoption. Recent
studies show that many teachers and pre-service teachers lack a confident conceptual and ethical
understanding of AI systems, that literacy frameworks are uneven, and that professional
development often underestimates the pedagogical redesign required [
        <xref ref-type="bibr" rid="ref22 ref23">22, 23</xref>
        ]. Where law creates
explicit literacy duties for deployers and providers, these findings imply a shift from optional
training to documented programmes with demonstrable competencies and equity safeguards [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
      </p>
      <p>
        Standards can help bridge policy to practice, but are not a substitute for pedagogy or
context-sensitive safeguards. ISO/IEC 42001 analyses suggest that management system approaches can
improve documentation, risk routines, and accountability, which is helpful for ministries,
universities, and vendors preparing for audits [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. However, a standards-first approach can invite
“compliance minimalism” if not paired with education-specific indicators (learning, equity,
wellbeing) and external evaluation. The emerging European work on harmonised standards offers a
path to consistent technical expectations [
        <xref ref-type="bibr" rid="ref26">26</xref>
         ]. Still, education authorities must add
sector-specific criteria and evidence-based plans to make classroom expectations meaningful.
      </p>
      <p>
        Equity and children’s rights provide an additional lens for interpreting the international
landscape. Studies in K-12 and child-centred design communities emphasise participation,
non-discrimination, and the risks of transferring adult-centric models into child contexts [
        <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
        ]. This
literature supports IO calls to operationalize children’s rights in procurement and classroom use, to
involve children and families in design and evaluation, and to guard against systems that shift error
or surveillance burdens onto already disadvantaged learners. Practically, equity-aware impact
assessment and participatory evaluation should be routine for any AI affecting placement, grading,
or behavioural decisions.
      </p>
      <p>
        A limitation across peer-reviewed syntheses is the scarcity of rigorous, education-specific
indicators for monitoring AI’s effects at scale. Reviews repeatedly call for multi-level evaluation
designs, better causal inference about learning outcomes, and systematic reporting of harms and
benefits across subgroups [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. This strengthens the case for IOs and standards bodies to go beyond
principles by convening consensus indicator sets and reference evaluation protocols, with explicit
attention to validity, reliability, workload, accessibility, and student wellbeing.
      </p>
      <p>Because many educational uses of AI are embedded in general-purpose systems, governance
that treats models, applications, and institutional practices as a system will travel better across
contexts. Governance scholars argue that such systems approaches raise the odds that principles
and rules will translate into safer, more equitable practice. For IOs, this means linking rights, risk
management, and standards to concrete pedagogical and institutional routines, and supporting
member states to build evaluation and compliance capacity.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>This paper shows that international AI in education governance is crystallizing into a layered
model: sectoral guidance from UNESCO, comparative evidence from the OECD, a child-rights
baseline from UNICEF, a binding human-rights treaty from the CoE, a risk-based regulatory regime
from the EU AI Act, and operational scaffolding through international and European standards.
These instruments align around human-centred and rights-respecting use while introducing
concrete controls for high-risk education applications and routes to implementation via standards.</p>
      <p>Scientifically, the paper contributes a cross-organization mapping of approaches, which links
normative principles to enforceable rules and auditable practices, and which clarifies how “soft
law” and “hard law” interact in education settings. It also documents the emerging role of
harmonized standards as a mechanism for translating legal duties into verifiable controls.</p>
      <p>Practically, the synthesis offers a near-term action framework for ministries, school systems,
and universities: identify and register high-risk education uses under the AI Act; adopt ISO/IEC
42001 and 23894 processes to prepare documentation, oversight, and risk management; and embed
child-rights and equity safeguards in procurement and classroom practice, consistent with
UNESCO and UNICEF guidance.</p>
      <p>Future research should develop and test shared indicators for learning, equity, and wellbeing to
enable longitudinal monitoring of AI’s impacts; study how general-purpose models and
forthcoming harmonized standards affect education procurement and assurance; and track uptake
and domestication of the CoE convention across diverse legal systems, including implications for
schools and higher education. Comparative, multi-country designs aligned with OECD monitoring
and UNESCO sectoral priorities would help build cumulative, policy-relevant evidence.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used GPT-5 Instant and Grammarly Edu to
check grammar and spelling and to polish the text. After using these services, the authors reviewed
and edited the content as needed and take full responsibility for the publication’s content.</p>
      <p>[Table residue: a comparative matrix of the instruments across policy priorities (teacher
professional development and workload relief), procurement, assurance and conformity,
monitoring and evaluation, implementation support, child-rights safeguards, oversight and
remedies, and standards. Recoverable details include: bans on emotion inference in exams and
workplaces; the AI Act’s Article 4 AI-literacy duty for providers and deployers, applicable since
2 February 2025; conformity assessment and registration for high-risk systems; post-market
monitoring and incident reporting for high-risk and GPAI systemic-risk systems; auditable
management systems and risk controls under ISO/IEC 42001 and 23894, including requirements
for competence, roles, and continual improvement; and UNICEF’s practical toolkits for
child-centred AI, briefs, use cases, and LMIC perspectives.]</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <article-title>Who on Earth Is Using Generative AI?</article-title>
          ; World Bank,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>UNESCO</surname>
          </string-name>
          <article-title>Technology in Education: A Tool on Whose Terms?</article-title>
          Available online: https://www.unesco.org/gem-report/sites/default/files/medias/fichiers/2023/07/Summary_v5.pdf (accessed on 20 August
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>UNESCO</surname>
          </string-name>
          <article-title>Guidance for Generative AI in Education and Research</article-title>
          Available online: https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research (accessed on 20 August
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>OECD</surname>
          </string-name>
          <article-title>The OECD Artificial Intelligence (AI) Principles</article-title>
          Available online: https://oecd.ai/en/aiprinciples (accessed on 20 August
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>UNICEF</surname>
          </string-name>
          <article-title>Policy Guidance on AI for Children | Innocenti Global Office of Research and Foresight</article-title>
          Available online: https://www.unicef.org/innocenti/reports/policy-guidance-aichildren (accessed on 20 August
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <source>[6] Council of Europe The Framework Convention on Artificial Intelligence</source>
          Available online: https://www.coe.int/en/web/artificial
          <article-title>-intelligence/the-framework-convention-on-artificialintelligence (accessed on 20</article-title>
          <year>August 2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>European</given-names>
            <surname>Union Regulation -</surname>
          </string-name>
          EU -
          <year>2024</year>
          /1689 - EN
          <string-name>
            <surname>-</surname>
          </string-name>
          EUR-Lex Available online: https://eurlex.europa.eu/eli/reg/2024/1689/oj/eng (accessed
          <source>on 20 August</source>
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>International</given-names>
            <surname>Organization for Standardization</surname>
          </string-name>
          <string-name>
            <surname>ISO</surname>
          </string-name>
          /IEC 42001:2023 Available online: https://www.iso.
          <source>org/standard/42001 (accessed on 20 August</source>
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>International Organization for Standardization</surname>
          </string-name>
          <article-title>ISO/IEC 23894:2023</article-title>
          Available online: https://www.iso.org/standard/77304.html (accessed on 20 August
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Marsden</surname>
            ,
            <given-names>C.T.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Christou</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <article-title>Artificially Intelligent Regulation: Global Norms, International Political Economy and the Brussels Effect</article-title>
          .
          <source>IET Conference Proceedings</source>
          <year>2024</year>
          ,
          <fpage>95</fpage>
          -
          <lpage>97</lpage>
          , doi:https://doi.org/10.1049/icp.2024.2535.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Cenelec</given-names>
            <surname>Artificial</surname>
          </string-name>
          Intelligence Available online: https://www.cencenelec.eu/areas-of-work/cencenelec-topics/artificial-intelligence
          <source>/ (accessed on 20 August</source>
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Taeihagh</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <article-title>Governance of Generative AI</article-title>
          .
          <source> Policy and Society</source>
          <year>2025</year>
          ,
          <volume>44</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>22</lpage>
          , doi:https://doi.org/10.1093/polsoc/puaf001.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Barrett</surname>
            ,
            <given-names>L.F.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Adolphs</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Marsella</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Martinez</surname>
            ,
            <given-names>A.M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Pollak</surname>
            ,
            <given-names>S.D.</given-names>
          </string-name>
          <article-title>Emotional Expressions Reconsidered: Challenges to Inferring Emotion from Human Facial Movements</article-title>
          .
          <source>Psychological Science in the Public Interest</source>
          <year>2019</year>
          ,
          <volume>20</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>68</lpage>
          , doi:https://doi.org/10.1177/1529100619832930.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Zloteanu</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Krumhuber</surname>
            ,
            <given-names>E.G.</given-names>
          </string-name>
          <article-title>Expression Authenticity: The Role of Genuine and Deliberate Displays in Emotion Perception</article-title>
          .
          <source>Frontiers in Psychology</source>
          <year>2021</year>
          ,
          <volume>11</volume>
          , 611248, doi:https://doi.org/10.3389/fpsyg.2020.611248.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Avital</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Egel</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Weinstock</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Malka</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>Enhancing Real-Time Emotion Recognition in Classroom Environments Using Convolutional Neural Networks: A Step towards Optical Neural Networks for Advanced Data Processing</article-title>
          .
          <source>Inventions</source>
          <year>2024</year>
          ,
          <volume>9</volume>
          , 113, doi:https://doi.org/10.3390/inventions9060113.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Ba</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          <article-title>Measuring Emotions in Education Using Wearable Devices: A Systematic Review</article-title>
          .
          <source>Computers &amp; Education</source>
          <year>2023</year>
          ,
          <volume>200</volume>
          , 104797, doi:https://doi.org/10.1016/j.compedu.2023.104797.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Mutimukwe</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Viberg</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>McGrath</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Cerratto-Pargman</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <article-title>Privacy in Online Proctoring Systems in Higher Education: Stakeholders' Perceptions, Awareness and Responsibility</article-title>
          .
          <source>Journal of Computing in Higher Education</source>
          <year>2025</year>
          , doi:https://doi.org/10.1007/s12528-025-09461-5.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Yoder-Himes</surname>
            ,
            <given-names>D.R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Asif</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Kinney</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Brandt</surname>
            ,
            <given-names>T.J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Cecil</surname>
            ,
            <given-names>R.E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Himes</surname>
            ,
            <given-names>P.R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Cashon</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Hopp</surname>
            ,
            <given-names>R.M.P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ross</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <article-title>Racial, Skin Tone, and Sex Disparities in Automated Proctoring Software</article-title>
          .
          <source>Frontiers in Education</source>
          <year>2022</year>
          ,
          <volume>7</volume>
          , 881449, doi:https://doi.org/10.3389/feduc.2022.881449.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Kizilcec</surname>
            ,
            <given-names>R.F.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Huber</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Papanastasiou</surname>
            ,
            <given-names>E.C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Cram</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Makridis</surname>
            ,
            <given-names>C.A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Smolansky</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Zeivots</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Raduescu</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <article-title>Perceived Impact of Generative AI on Assessments: Comparing Educator and Student Perspectives in Australia, Cyprus, and the United States</article-title>
          .
          <source>Computers and Education Artificial Intelligence</source>
          <year>2024</year>
          ,
          <volume>7</volume>
          , 100269, doi:https://doi.org/10.1016/j.caeai.2024.100269.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Xia</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Weng</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ouyang</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>T.J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Chiu</surname>
            ,
            <given-names>T.K.F.</given-names>
          </string-name>
          <article-title>A Scoping Review on How Generative Artificial Intelligence Transforms Assessment in Higher Education</article-title>
          .
          <source>International journal of educational technology in higher education</source>
          <year>2024</year>
          ,
          <volume>21</volume>
          , 40, doi:https://doi.org/10.1186/s41239-024-00468-z.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Khlaif</surname>
            ,
            <given-names>Z.N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Alkouk</surname>
            ,
            <given-names>W.A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Salama</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Eideh</surname>
            ,
            <given-names>B.A.</given-names>
          </string-name>
          <article-title>Redesigning Assessments for AI-Enhanced Learning: A Framework for Educators in the Generative AI Era</article-title>
          .
          <source>Education Sciences</source>
          <year>2025</year>
          ,
          <volume>15</volume>
          , 174, doi:https://doi.org/10.3390/educsci15020174.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Prasad</surname>
            ,
            <given-names>P.G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Schroeder</surname>
            ,
            <given-names>N.L.</given-names>
          </string-name>
          <article-title>Learning about AI: A Systematic Review of Reviews on AI Literacy</article-title>
          .
          <source>Journal of Educational Computing Research</source>
          <year>2025</year>
          ,
          <volume>63</volume>
          , doi:https://doi.org/10.1177/07356331251342081.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Pei</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Jing</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          <article-title>Empowering Preservice Teachers' AI Literacy: Current Understanding, Influential Factors, and Strategies for Improvement</article-title>
          .
          <source>Computers and Education: Artificial Intelligence</source>
          <year>2025</year>
          ,
          <volume>8</volume>
          , 100406, doi:https://doi.org/10.1016/j.caeai.2025.100406.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Kelley</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Wenzel</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <article-title>Advancing Artificial Intelligence Literacy in Teacher Education through Professional Partnership Inquiry</article-title>
          .
          <source>Education Sciences</source>
          <year>2025</year>
          ,
          <volume>15</volume>
          , 659, doi:https://doi.org/10.3390/educsci15060659.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Mazzinghy</surname>
            ,
            <given-names>A.O. da C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Silva</surname>
            ,
            <given-names>R.M. dos S. e</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Fernandes</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Batista</surname>
            ,
            <given-names>E.D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Picanço</surname>
            ,
            <given-names>A.R.S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Monteiro</surname>
            ,
            <given-names>N.J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>de Amorim</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Cardoso</surname>
            ,
            <given-names>B. de F.O.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Silva</surname>
            ,
            <given-names>J.M.N. da</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Martins</surname>
            ,
            <given-names>V.W.B.</given-names>
          </string-name>
          <article-title>Assessment of the Benefits of the ISO/IEC 42001 AI Management System: Insights from Selected Brazilian Logistics Experts: An Empirical Study</article-title>
          .
          <source>Standards</source>
          <year>2025</year>
          ,
          <volume>5</volume>
          , 10, doi:https://doi.org/10.3390/standards5020010.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Soler</surname>
            ,
            <given-names>G.J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>De Nigris</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bassani</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sanchez</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Evas</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Andre</surname>
            ,
            <given-names>A.-A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Boulange</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <article-title>Harmonised Standards for the European AI Act</article-title>
          Available online: https://publications.jrc.ec.europa.eu/repository/handle/JRC139430 (accessed on 20 August
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Gouseti</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>James</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Fallin</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Burden</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>The Ethics of Using AI in K-12 Education: A Systematic Literature Review</article-title>
          .
          <source>Technology Pedagogy and Education</source>
          <year>2024</year>
          ,
          <volume>34</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>22</lpage>
          , doi:https://doi.org/10.1080/1475939x.2024.2428601.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Wilson</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Atabey</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Revans</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <article-title>Towards Child-Centred AI in Children's Learning Futures: Participatory Design Futuring with SmartSchool and the Co-Design Stories Toolkit</article-title>
          .
          <source>International Journal of Human-Computer Studies</source>
          <year>2025</year>
          ,
          <volume>199</volume>
          , 103431, doi:https://doi.org/10.1016/j.ijhcs.2024.103431.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Monib</surname>
            ,
            <given-names>W.K.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Qazi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Apong</surname>
            ,
            <given-names>R.A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Azizan</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Silva</surname>
            ,
            <given-names>L.D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Yassin</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <article-title>Generative AI and Future Education: A Review, Theoretical Validation, and Authors' Perspective on Challenges and Solutions</article-title>
          .
          <source>PeerJ Computer Science</source>
          <year>2024</year>
          ,
          <volume>10</volume>
          , e2105, doi:https://doi.org/10.7717/peerjcs.2105.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>