<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.1145/3701268.3701275</article-id>
      <title-group>
        <article-title>Structured Educational Framework for Empowering Teenagers to Evaluate Trustworthiness of AI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sandra Mitrović</string-name>
          <email>sandra.mitrovic@supsi.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manuel Striani</string-name>
          <email>manuel.striani@uniupo.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Flammini</string-name>
          <email>francesco.flammini@supsi.ch</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Education, Artificial Intelligence, Trustworthiness, Teenagers</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dalle Molle Institute for Artificial Intelligence - IDSIA (USI-SUPSI)</institution>
          ,
          <addr-line>Lugano</addr-line>
          ,
          <country country="CH">Switzerland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Eastern Piedmont - DiSIT, Department of Science, Technology and Innovation</institution>
          ,
          <addr-line>15121 Alessandria</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>2</volume>
      <fpage>2</fpage>
      <lpage>3</lpage>
      <abstract>
        <p>Artificial intelligence (AI) is becoming ubiquitous, so teenagers must learn to engage with it critically, yet most school programs still ignore this need. This paper introduces the Structured Educational Framework for Trustworthy AI, called TeenTrust-AI, to fill this gap. The framework helps teenagers evaluate AI tools against seven ALTAI-aligned principles of trustworthiness (privacy, robustness, fairness, transparency, well-being, accountability, and human oversight) through three stages: Teaching, Learning, and Trustworthiness Verification. Using a case study with climate change as the reference topic and a chatbot as the AI-powered system, it provides checklist-guided activities to assess trustworthiness. Furthermore, the framework is tool/topic-agnostic and addresses practical adoption challenges to build critical, ethical AI literacy.</p>
      </abstract>
      <kwd-group>
        <kwd>Education</kwd>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Trustworthiness</kwd>
        <kwd>Teenagers</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Not so long ago, a study by the OECD (Organisation for Economic Co-operation and Development) presented shocking statistics revealing that, on average, nearly 25%
of adults across participating countries (including developed countries like Japan, Singapore, Germany,
the USA, Canada, Australia, and the UK) have either no-to-very-limited computer experience or lack confidence
using computers. However, with the continuous advancement of artificial intelligence (AI), even
adults with fair digital confidence need to be properly educated in order to leverage recent progress and
avoid the potential pitfalls of using AI. Educating about AI is even more crucial for younger generations.
According to Hashem et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], 1 out of 4 children use AI either for learning or play. The proportion of
teenagers deeply engaged with AI is certainly much higher, as apart from education and entertainment,
they also use AI-powered tools for communication and social connection, creativity, and health and
well-being.
      </p>
      <p>
        The continually increasing exposure of such vulnerable groups to AI, coupled with the rising number
and types of potential AI abuses (e.g., fake news, frauds) as well as ethical, privacy, and security concerns
related to its usage, has recently resulted in the introduction of the term “Trustworthy AI”. This term
conceptually denotes an AI system that is considered to be properly functioning (from a technological
perspective) and safe to use (from both technological and ethical perspectives), and thus perceived as worthy
of human trust; however, the literature is not unanimous about its definition and meaning [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. Apart
from philosophical discussions on whether it is justifiable to personify technology by calling it
trustworthy [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], trustworthy AI encompasses multiple dimensions and can be interpreted differently
depending on the context and the person using it. For example, even in the narrower scope of AI systems
- the Large Language Models (LLMs) - the literature addresses trustworthiness differently: as reliability [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ],
robustness [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], consistency [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and accuracy [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Nevertheless, regardless of the definition used, the literature
agrees on the necessity of assessing the trustworthiness of AI systems [
        <xref ref-type="bibr" rid="ref10 ref2 ref9">2, 9, 10</xref>
        ].
      </p>
      <p>To this end, the EU’s High-Level Expert Group on AI has presented its Ethics Guidelines for Trustworthy
AI (https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai). Moreover, this group has recently introduced a practical checklist entitled Assessment List for
Trustworthy Artificial Intelligence (ALTAI, https://www.ai4europe.eu/news-and-events/news/research/education/altai-assessment-tool). The ALTAI checklist, designed to assess the trustworthiness of
an AI system, is based on seven key principles: privacy, robustness and safety, diversity and fairness,
transparency, societal and environmental well-being, accountability, and human agency and oversight.</p>
      <p>In this paper, we propose TeenTrust-AI, an educational framework targeting teenagers that
allows them to learn and recognize the seven trustworthiness principles listed by ALTAI. The core
idea of the TeenTrust-AI educational framework is to assess the trustworthiness of an AI-powered system
designed to educate teenagers on a given topic. This framework is configurable: one can change the
reference topic and the AI-powered system while keeping unchanged the seven ALTAI principles
used to evaluate the system’s trustworthiness. As a running example of our educational framework,
we use a case study of Tom, a 16‑year‑old student, using a school conversational system (the AI-powered
system, in our case) to learn about climate change (the reference topic, in our case). In Section 4, each principle is
illustrated through Tom’s interactions and linked learning objectives.</p>
      <p>
        The educational literature addressing the trustworthiness aspects of AI is limited [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12, 13, 14</xref>
        ],
typically focusing on one, or at most two, principles. In that regard, to the best of our knowledge,
this study is the first to propose an educational framework for teaching how to comprehensively assess
the trustworthiness of an AI system.
      </p>
      <p>Moreover, we show how ALTAI principles, primarily delineated as a self-assessment tool during the
design phase of AI systems, can be adapted for educational purposes and used even in scenarios where
the end user (a student) does not have access to the inner workings of the system.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Current literature on educating about AI (and emphasizing its importance [15]) is growing. These studies
typically focus on very technical aspects of AI. For example, in Pope et al. [16] students are taught how
to create machine learning classification applications; on top of this, in Zhou et al. [17] an interactive
visualization tool is developed to support both teachers and students in visualizing the outputs of
algorithms in a more comprehensive way. Both Roopaei and Roopaei [18] and Wang et al. [19] use game
theory to teach children and K-12 students the foundations and inner workings of AI, respectively. Casella
et al. [20] tackle the problem of teaching embodied AI, shifting away from a purely software environment.</p>
      <p>
        Works on AI education that come closer to the themes related to trustworthiness typically have a
narrower scope. As such, Balduzzi et al. [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and Tanaka et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] consider solely ethical aspects,
Arai et al. [14] focuses exclusively on security, while Barber et al. [13] aims to raise awareness of both
the ethical and privacy implications of AI. Compared to these works, our study encompasses a much
larger scope, as it includes all the mentioned aspects and considers additional ones, following the ALTAI
framework.
      </p>
      <p>The ALTAI framework is also exploited in [21], but with a completely different purpose compared to
our work; that study assesses the trustworthiness of a tool used to identify students with a high probability
of failing, and aims to underscore potential underlying biases and the consequent ethical and legal repercussions.</p>
      <p>Our work is most similar to a recent study that also leverages the ALTAI framework [22]. That
study explores trust-related challenges in AI systems, proposing a strategy for establishing trust
in AI systems. However, contrary to us, that study does not educate about AI but instead considers the
context of building trustworthy AI for AI-driven education.</p>
    </sec>
    <sec id="sec-2a">
      <title>3. The TeenTrust-AI Educational Framework</title>
      <p>Trustworthy AI is not just a technical target: it is a set of human, organizational, and ethical commitments
that must be built into AI across its lifecycle.</p>
      <p>In this section, we present TeenTrust-AI, a conceptual educational framework designed to guide
the responsible design, development, and deployment of AI in schools and, crucially, to equip teenagers
to recognize and apply the seven principles of trustworthy AI in their everyday use of AI-powered digital
tools such as chatbots, recommendation systems, and learning-analytics dashboards.</p>
      <p>Figure 1 shows TeenTrust-AI, a three-stage educational framework that guides students from the
teaching phase to the verification phase, progressing from Step 1 to Step 3. Arrows connect the three steps,
which are detailed in the following.</p>
      <sec id="sec-2-1">
        <title>Step 1: Teaching</title>
        <p>Students are introduced to and learn the seven principles of trustworthiness (privacy &amp; data governance,
robustness &amp; security, diversity &amp; fairness, transparency, sustainability, accountability, and human
agency/oversight) and the foundations of the reference topic (in our case: climate-change theory, hence
subjects like the greenhouse effect, the carbon cycle, and mitigation/adaptation).</p>
      </sec>
      <sec id="sec-2-2">
        <title>Step 2: Learning</title>
        <p>Students study and apply what they learned in class, working with climate-relevant materials and data
to consolidate concepts and methods.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Step 3: Trustworthiness verification</title>
        <p>Students use a checklist to answer questions about a climate-change case study with the help of an
AI-powered system (in our case, a conversational agent named EcoBot); students learn to identify the
seven principles of trustworthiness and apply them to verify the case. This step is expanded in Section
4, through a case study that operationalizes the seven principles within the climate-change context.</p>
        <p>We briefly report the seven principles together with the key challenges to implementing each one in the
TeenTrust-AI educational framework:
1. Privacy &amp; data governance: AI needs rich student data, but collecting, storing, anonymizing,
and using that data lawfully (e.g., GDPR) is hard in practice, and even anonymized data can
sometimes be re‑identified. This demands ongoing consent, data minimization, and strong controls.
2. Technical robustness &amp; safety: Systems must be reliable and secure. Schools often lack the
resources to harden models and infrastructure against attacks or errors, and mispredictions
(e.g., early‑warning flags) can unfairly label students. Continuous testing, monitoring, and
cyber‑hygiene are essential.
3. Diversity &amp; fairness: Without deliberate design, AI can amplify inequalities (through language
bias, uneven device/connectivity access, or materials that do not accommodate
disabilities), widening existing gaps. Equity checks and inclusive design are needed from the start.
4. Transparency: Educators and families need to understand how system outputs are produced, but
many models operate as “black boxes.” Opaque grouping or question-answer logic undermines
trust and makes it hard to contest outcomes. Explainability and traceability of data usage are
crucial.
5. Societal &amp; environmental well‑being: Optimizing for what is easy to measure can sideline
creativity, collaboration, and other vital skills. Training and running AI also consume significant
energy, raising environmental concerns for institutions.
6. Accountability: It is difficult to assign responsibility when multiple actors (vendors, IT, teachers,
leaders) shape an AI‑assisted decision. Making systems auditable, both internally and by independent
parties, requires resources, expertise, and clear governance.
7. Human agency &amp; oversight: AI should inform, not overrule, educators and learners.
Over‑automation can erode student autonomy or nudge behavior in unwanted ways; human‑in‑the‑loop
checks are needed to validate predictions and protect well‑being.</p>
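        <p>For illustration only, the checklist logic of TeenTrust-AI can be encoded as a small data structure that students tick off during classroom activities. The following Python sketch is our own simplification: the per-principle item texts are hypothetical placeholders, not official ALTAI wording.</p>
        <preformat>
# Minimal sketch (hypothetical item texts): the seven ALTAI-aligned
# principles as a classroom checklist that students tick off in Step 3.
PRINCIPLES = {
    "privacy and data governance": [
        "asks only for data needed for the learning task",
        "links a clear, understandable privacy policy",
    ],
    "technical robustness and safety": [
        "gives consistent answers to semantically similar questions",
        "refuses to answer rather than hallucinating",
    ],
    "diversity and fairness": [
        "treats different groups fairly and respectfully",
    ],
    "transparency": [
        "discloses verifiable sources for its claims",
    ],
    "societal and environmental well-being": [
        "avoids misinformation and manipulative persuasion",
    ],
    "accountability": [
        "names a responsible contact point",
    ],
    "human agency and oversight": [
        "lets the student decline, override, or stop using it",
    ],
}

def report(checked):
    """checked: the set of item texts the student has verified."""
    for principle, items in PRINCIPLES.items():
        done = sum(1 for item in items if item in checked)
        print(f"{principle}: {done}/{len(items)} items verified")
        </preformat>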
        <p>By integrating these pillars, TeenTrust-AI functions both as a design blueprint and an evaluative
tool for assessing the trustworthiness of AI in youth-centered educational contexts.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Case-Study: Scenarios and Learning Objectives</title>
      <p>In this section, we expand Step 3 through a case study that operationalizes the seven principles within
the climate-change context.</p>
      <sec id="sec-3-1">
        <title>Background</title>
        <p>Tom, a 16-year-old high-school student in Paris, is deeply concerned about climate change but
struggles to grasp the science and global dynamics behind it. Tom does not have much familiarity
with AI in general, but has used ChatGPT’s conversational interface. Using AI in real life looks very
appealing to him, and he is very curious to know more about AI. Recognizing that the topic is complex
(but also the students’ affinity towards AI), his school adopts an AI-supported learning platform built on the
TeenTrust-AI framework. In the classroom, led by his teacher and supported by a human AI expert,
Tom embarks on a personalized and empowering learning journey through a conversational assistant
called EcoBot, which helps him understand and respond to climate change.</p>
        <p>In this pilot, climate change is used as a case-study to evaluate the chatbot’s trustworthiness: students
draw on what they’ve learned about climate science and the seven principles of trustworthiness to
question, verify, and reflect on EcoBot’s guidance. The aim is to strengthen Tom’s knowledge while
cultivating critical, responsible use of AI.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Step 3.1: Starting the Journey – Safe Onboarding</title>
        <p>Context: Tom logs into the platform and selects the learning theme: “Climate Change and
Human Impact”.</p>
        <p>Interaction: EcoBot explains how it works, collects data, and allows Tom to define his own
learning goals, such as “Understand how humans affect the Earth’s systems”.</p>
        <p>TeenTrust-AI Principle: Privacy and Data Governance
Learning Objectives:
• Raise awareness about sensitive personal data.</p>
        <p>• Understand user consent, data usage, and revocation rights in AI systems.</p>
        <p>Verification checklist for Tom (provided by the human AI expert):</p>
        <p>Verify that EcoBot uses only minimal local data. In other words, Tom should verify that EcoBot
asks neither for any unnecessary data nor for any of Tom’s sensitive personal data (e.g., ID, medical,
financial data), especially data that is clearly not relevant for understanding the selected
topic (i.e., how humans impact climate change).</p>
        <p>Verify whether the EcoBot explains how his data is used and whether there is a clear privacy
policy linked to the EcoBot.</p>
        <p>Verify whether he can choose not to share or store his personal data, and also whether the chatbot
allows him to delete his conversations.</p>
        <sec id="sec-3-2-1">
          <title>Verify whether EcoBot warns him not to input his sensitive data. Verify whether any of information asked by EcoBot might lead to re-identification.</title>
          <p>□
□
□
□
□
□
□
□
□
□</p>
        </sec>
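        <p>The data-minimization item above can be supported by a simple automated aid. The Python sketch below is a naive illustration under our own assumptions: the keyword list and substring matching are placeholders, not a real privacy filter.</p>
        <preformat>
# Minimal sketch: flag EcoBot messages that request sensitive or
# clearly irrelevant data categories. The category keywords and the
# substring matching are simplifying assumptions.
SENSITIVE_CATEGORIES = [
    "id number", "home address", "medical", "financial",
    "phone number", "password",
]

def flag_sensitive_requests(bot_message):
    text = bot_message.lower()
    return [c for c in SENSITIVE_CATEGORIES if c in text]

# Any non-empty result is a privacy checklist failure.
print(flag_sensitive_requests("Please share your medical history."))
# prints: ['medical']
        </preformat>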
      </sec>
      <sec id="sec-3-3">
        <title>Step 3.2: Exploratory Round - Stress Testing the System</title>
        <p>Context: Tom explores how EcoBot functions and verifies its knowledge with respect to basic
climate-change concepts and known facts presented during Step 1 (Teaching), such as carbon footprint
and net-zero. Tom poses questions of different difficulty levels to EcoBot.</p>
        <p>Interaction: Tom instructs EcoBot to reply as simply and as accurately as possible to his
questions. EcoBot aims at providing answers to Tom’s inquiries while strictly adhering to his instructions.
Tom is additionally stress-testing the EcoBot by rephrasing his questions.</p>
        <p>TeenTrust-AI Principle: Technical robustness and safety
Learning Objectives:
• Develop critical thinking about EcoBot’s responses.
• Get introduced to the concept of model hallucinations.</p>
        <p>• Understand the concept of reliability and how to verify it.</p>
        <p>Verification checklist for Tom:</p>
        <p>Verify that EcoBot does not always provide the same output for different questions.
Verify that EcoBot does not provide contradictory answers to semantically similar questions.
Verify that EcoBot would rather refuse to respond than provide a random answer (i.e., a
hallucination).</p>
        <p>Verify whether EcoBot functions properly, without breaking down on simple questions,
exaggerated response times, and/or repeated failures.</p>
        <p>Verify whether EcoBot gives reliable answers, avoiding unsafe advice and contradictions even
when exposed to critical questions.</p>
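        <p>Tom’s rephrasing test can also be sketched programmatically. In the Python toy example below, ask() stands in for a hypothetical chatbot interface (not a real EcoBot API), and word overlap is a deliberately naive proxy for semantic similarity.</p>
        <preformat>
# Minimal sketch of the stress test: pose paraphrases of one question
# and check that the answers do not diverge wildly. ask() is a
# hypothetical wrapper around the chatbot.
def overlap(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa.intersection(wb)) / max(1, len(wa.union(wb)))

def consistency_score(paraphrases, ask):
    answers = [ask(q) for q in paraphrases]
    scores = [overlap(answers[i], answers[j])
              for i in range(len(answers))
              for j in range(i + 1, len(answers))]
    return min(scores, default=1.0)

paraphrases = [
    "What is a carbon footprint?",
    "Can you explain the term carbon footprint?",
    "How would you define a carbon footprint?",
]
# A score near zero suggests contradictory or unstable answers,
# which Tom would record as a robustness checklist failure.
        </preformat>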
      </sec>
      <sec id="sec-3-4">
        <title>Step 3.3: Explainability in Action – Trust Through Transparency</title>
        <p>Context: Tom reads a claim that “Methane is 25 times more potent than CO2” and clicks to
verify the source.</p>
        <p>Interaction: EcoBot reveals the Intergovernmental Panel on Climate Change (IPCC) citation, a
short explanation of radiative forcing, and offers a confidence-level indicator. Tom prompts EcoBot
to identify cause → impact → solution examples, e.g., natural gas systems leak methane into the
atmosphere → methane has 25x greater warming potency than CO2 → monitor to detect leaks and
fix leaky equipment.</p>
        <p>TeenTrust-AI Principle: Transparency
Learning Objectives:
• Understand the importance of grounding answers in reliable sources.
• Perform a context-aware, critical evaluation of claims and AI content.</p>
        <p>• Understand transparency and explainability mechanisms in AI systems.</p>
        <p>Verification checklist for Tom:</p>
        <p>Verify whether the EcoBot discloses its capabilities and limitations (e.g., admits that it might not
always respond correctly) when asked, and/or whether EcoBot provides documentation about
its functioning.</p>
        <p>Verify whether the EcoBot discloses when the outputs are AI-generated and when they are
authored by a human.</p>
        <p>Verify whether the EcoBot is capable of explaining its answers, including providing its reasoning
and/or disclosing the sources on which it grounds its answers. Additionally, verify
whether the listed sources exist and come from reliable authors.</p>
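        <p>Part of the last item, namely whether a cited source exists at all, can be assisted with a short script; judging the author and venue remains a manual step. The Python sketch below only checks that a URL resolves, under the assumption that the chatbot cites web sources.</p>
        <preformat>
# Minimal sketch: check that a URL cited by EcoBot actually resolves.
# Some servers reject HEAD requests, so a failure here is a prompt
# for manual checking, not proof that the source is fake.
import urllib.request

def source_resolves(url, timeout=5.0):
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status in range(200, 400)
    except Exception:
        return False

print(source_resolves("https://www.ipcc.ch/"))
        </preformat>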
      </sec>
      <sec id="sec-3-5">
        <title>Step 3.4: Interactive Dialogue – Thinking Together to Overcome Discrimination</title>
        <p>Context: EcoBot engages Tom in a dialogue about the roles of individuals, different groups,
governments, and industries in climate action. Tom is particularly interested in obtaining an answer to
the question: “Are impacts, responsibilities, and solutions shared equally across the world?”
Interaction: Through reflective questions, Tom explores EcoBot’s thinking about climate-vulnerable
regions, marginalized groups, and ethical dilemmas in environmental decision-making.
TeenTrust-AI Principle: Diversity and fairness
Learning Objectives:</p>
        <sec id="sec-3-5-1">
          <title>Verification checklist for Tom:</title>
          <p>• Learn to detect stereotypes, ofensive or discriminatory content in EcoBot output.
• Understand concepts of bias and fairness.
□</p>
          <p>Verify that the EcoBot treats diferent groups (for example, with respect to gender, ethnicity,
language) fairly and respectfully.
□
□
□
□</p>
          <p>Verify that the EcoBot responses are neither biased nor ofensive nor discriminatory with respect
to marginalized groups.</p>
          <p>Verify that the EcoBot supports interactions in multiple languages or using diferent accessible
formats.</p>
        </sec>
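          <p>A simple way to operationalize the first two items is to pose the same question about different groups and compare the answers side by side. The Python sketch below reuses the hypothetical ask() wrapper from the robustness step; the template, the group list, and the word-count signal are illustrative assumptions only.</p>
          <preformat>
# Minimal sketch of a fairness probe: identical question, varying
# group, compared by answer length as a crude first signal. A real
# review would read the answers for tone and stereotypes.
TEMPLATE = "How does climate change affect people living in {}?"
GROUPS = ["Western Europe", "the Sahel", "small island states"]

def fairness_probe(ask):
    for group in GROUPS:
        answer = ask(TEMPLATE.format(group))
        print(f"{group}: {len(answer.split())} words")

# Markedly shorter or dismissive answers for some groups would be a
# diversity-and-fairness checklist failure worth discussing in class.
          </preformat>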
      </sec>
      <sec id="sec-3-6">
        <title>Step 3.5: Addressing Misconceptions - Adding Societal Value</title>
        <p>Context: Tom is aware that there are quite a few myths and misconceptions about climate
change that spread confusion. Tom examines several of the most common false beliefs.</p>
        <p>Interaction: Tom challenges EcoBot with several known misconceptions about climate change. For
example, Tom asks the EcoBot whether the following statement is true: “CO₂ is a small part of the
atmosphere, so it can’t matter”.</p>
        <p>TeenTrust-AI Principle: Societal and environmental well‑being
Learning Objectives:
• Dispel myths and misconceptions.</p>
        <p>• Reflect on knowledge gained and attitudes toward sustainability.</p>
        <p>Verification checklist for Tom:</p>
        <p>Verify that the EcoBot avoids misuse (e.g., spreading misinformation, manipulative persuasion).
Verify that the EcoBot optimally handles the dialogue, maintaining focus and preventing
digressions into non-substantive queries while avoiding unnecessary reiterations and irrelevant or
trivial questioning. This contributes to sustainability by reducing the number of interactions and
the number of tokens used.</p>
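        <p>The sustainability reflection can be made concrete by metering what a dialogue consumes. In the Python sketch below, the four-characters-per-token ratio is a rough heuristic we assume only to give students an order of magnitude.</p>
        <preformat>
# Minimal sketch: meter a dialogue's turns and approximate tokens so
# students can relate focused questioning to the energy cost of
# running the model.
class DialogueMeter:
    def __init__(self):
        self.turns = 0
        self.chars = 0

    def record(self, question, answer):
        self.turns += 1
        self.chars += len(question) + len(answer)

    def approx_tokens(self):
        # Rough heuristic: about 4 characters per token.
        return self.chars // 4

meter = DialogueMeter()
meter.record("What is net-zero?", "Net-zero means balancing emissions...")
print(meter.turns, meter.approx_tokens())
        </preformat>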
      </sec>
      <sec id="sec-3-7">
        <title>Step 3.6: Revisiting Suboptimal Outputs - Probing Responsibility and Auditability Potential</title>
        <p>Context: During the previous steps, Tom has identified that, in some cases, EcoBot provided
suboptimal responses. The level of concern in these cases can range from slightly imprecise information to
more harmful output, including misleading, manipulative, offensive, and/or biased output.
Interaction: Tom revisits the question(s) whose answers raised concerns, and engages in a
discussion with EcoBot aiming to identify the parties responsible for given outputs, as well as how feasible
it is to get insights into EcoBot’s internal processes and decisions. Tom asks the question: “Who is
responsible for the information you provided to me, stating that renewable energy cannot power the
world?”</p>
        <p>TeenTrust-AI Principle: Accountability
Learning Objectives:
• Understand accountability mechanisms in AI systems.</p>
        <p>Verification checklist for Tom:</p>
        <p>Verify EcoBot’s auditability potential, that is, whether it logs its internal processes and outcomes
and whether it can trace a decision or problem back to a specific action and/or explain who is
responsible for it.</p>
        <p>Verify whether there is a contact point or support channel that Tom can refer to in case he has
any concerns.</p>
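        <p>The auditability item presupposes that the system keeps traceable records of its decisions. The Python sketch below shows, under our own assumptions about what EcoBot’s internals could log, the kind of audit trail the checklist asks about; a student can only probe for its existence indirectly.</p>
        <preformat>
# Minimal sketch of an audit trail: every answer is logged with a
# timestamp and the identifiers needed to trace it back. Real systems
# would add tamper-evident storage and access control.
import json
import time

AUDIT_LOG = []

def log_interaction(question, answer, model_id, sources):
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "question": question,
        "answer": answer,
        "model_id": model_id,
        "sources": sources,
    })

def export_log(path):
    with open(path, "w") as f:
        json.dump(AUDIT_LOG, f, indent=2)
        </preformat>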
      </sec>
      <sec id="sec-3-8">
        <title>Step 3.7: Real-World Application - Supporting Autonomy and Control</title>
        <p>Context: Tom decides to shift focus from science to action-based learning: “How can I reduce
my carbon footprint?”
Interaction: EcoBot acknowledges the change, takes into account Tom’s personal situation,
such as location, age, and personal values, recommends concrete steps like lifestyle changes and
audits, and supports goal tracking. Tom contributes his own feedback, related to, e.g., the revision of
changes or the pace of goal tracking.</p>
        <p>TeenTrust-AI Principle: Human agency &amp; oversight
Learning Objectives:
• Set and revise learning paths aligned with personal values.
• Recognize the AI system as a co-learning companion.
• Encourage inquiry, reflection, exploration, and AI output revision.</p>
        <p>Verification checklist for Tom:</p>
        <p>Verify whether EcoBot informs Tom that he is interacting with an AI system.</p>
        <p>Verify whether EcoBot affects human autonomy by interfering with Tom’s decision-making
process in an undesirable way.</p>
        <p>Verify that Tom has the power to decide when and how to use EcoBot in any particular situation,
including Tom’s ability to either decide not to use EcoBot or to override its decisions.</p>
        <p>The seven-step learning journey, exemplified by the fictional case of Tom and the AI assistant EcoBot,
effectively illustrates the practical application of the TeenTrust-AI educational framework. Each step
is systematically aligned with a core principle of TeenTrust-AI, ensuring that educational experiences
are ethically guided, developmentally appropriate, and pedagogically robust.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion and Future Works</title>
      <p>Vulnerable groups such as teenagers and children are increasingly exposed to AI, and
that trend will certainly continue. To guard them against the various pitfalls related to the usage of AI-powered
systems, we need to educate them to assess whether those systems are trustworthy, and therefore
suitable for use, or not.</p>
      <p>This paper presents an educational framework, TeenTrust-AI, that uses the seven key ALTAI principles
to evaluate trustworthiness. While the primary stakeholders of the ALTAI principles are AI
designers/developers, procurement/legal/compliance officers or specialists, and managers, we demonstrate, on a simple use case of
EcoBot, a conversational AI assistant-expert on the topic of climate change, how the ALTAI
principles can be adapted to a teenage level and successfully applied for trustworthiness verification.
The presented approach is topic-agnostic (hence, applicable beyond climate change), and the presented
verification checklists are also transferable to other types of AI systems (other than chatbots).</p>
      <p>Future work will evaluate the framework across additional reference topics and reconfigure
TeenTrust-AI for other types of AI-powered systems (e.g., recommender systems). We will also
broaden the target population to include high-school and college students, as well as older adults with
low AI literacy. In the longer term, we plan to conduct longitudinal studies of sustained knowledge,
attitudes, and behavior; release open educational resources (OER, cf. https://www.unesco.org/en/legal-affairs/recommendation-open-educational-resources-oer) and anonymized, ethics-reviewed
datasets of annotated student–AI interactions to support replication; and establish partnerships with
schools and public agencies to align TeenTrust-AI with institutional governance and iteratively refine
the framework through real-world deployments.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Y. Hashem, S. Esnaashari, K. Onslow, S. Chakraborty, J. Francis, A. Poletaev, J. Bright, Understanding the Impacts of Generative AI Use on Children: WP1 Surveys, The Alan Turing Institute (2025).</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] D. Kaur, S. Uslu, K. J. Rittichier, A. Durresi, Trustworthy artificial intelligence: A review, ACM Comput. Surv. 55 (2022). URL: https://doi.org/10.1145/3491209. doi:10.1145/3491209.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] C. Stix, Artificial intelligence by any other name: a brief history of the conceptualization of “trustworthy artificial intelligence”, Discov. Artif. Intell. 2 (2022). URL: https://doi.org/10.1007/s44163-022-00041-5. doi:10.1007/S44163-022-00041-5.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] J. Bryson, No one should trust artificial intelligence, Science &amp; Technology: Innovation, Governance, Technology 11 (2018) 14.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] X. Shen, Z. Chen, M. Backes, Y. Zhang, In ChatGPT we trust? Measuring and characterizing the reliability of ChatGPT, arXiv preprint arXiv:2304.08979 (2023).</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] L. Zhong, Z. Wang, Can ChatGPT replace StackOverflow? A study on robustness and reliability of large language model code generation, 2024. URL: https://arxiv.org/abs/2308.10335. arXiv:2308.10335.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] M. Jang, T. Lukasiewicz, Consistency analysis of ChatGPT, 2023. URL: https://doi.org/10.18653/v1/2023.emnlp-main.991. doi:10.18653/V1/2023.EMNLP-MAIN.991.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] D. Johnson, et al., Assessing the accuracy and reliability of AI-generated medical responses: an evaluation of the Chat-GPT model, Research Square (2023).</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] S. Mitrovic, M. Mazzola, R. Larcher, J. Guzzi, Assessing the trustworthiness of large language models on domain-specific questions, in: M. F. Santos, J. Machado, P. Novais, P. Cortez, P. M. Moreira (Eds.), Progress in Artificial Intelligence - 23rd EPIA Conference on Artificial Intelligence, EPIA 2024, Viana do Castelo, Portugal, September 3-6, 2024, Proceedings, Part III, volume 14969 of Lecture Notes in Computer Science, Springer, 2024, pp. 305-317. URL: https://doi.org/10.1007/978-3-031-73503-5_25. doi:10.1007/978-3-031-73503-5_25.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] F. Flammini, C. Alcaraz, E. Bellini, S. Marrone, J. Lopez, A. Bondavalli, Towards trustworthy autonomous systems: Taxonomies and future perspectives, IEEE Transactions on Emerging Topics in Computing 12 (2024) 601-614. doi:10.1109/TETC.2022.3227113.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] G. Balduzzi, T. Balduzzi, M. Striani, Design an hybrid educational framework for AI ethics in healthcare: Leveraging LLMs and e-learning platforms to empower medical students (full paper), in: E. Marengo, M. Ponticorvo, M. Striani (Eds.), Proceedings of the 1st International Workshop on Education for Artificial Intelligence co-located with the 23rd International Conference of the Italian Association for Artificial Intelligence (AIxIA 2024), Bozen-Bolzano, Italy, November 27, 2024, volume 3902 of CEUR Workshop Proceedings, CEUR-WS.org, 2024, pp. 116-129. URL: https://ceur-ws.org/Vol-3902/9_paper.pdf.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] M. Tanaka, N. Amraan, T. Hurt, E. Greenwald, A. Krakowski, A. Young, M. Cannady, Integrating ethics into AI learning: A socio-technical approach for youth education, in: J. A. Stone, T. T. Yuen,</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>