<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Developing and Validating a Multidimensional AI Literacy Questionnaire: Operationalizing AI Literacy for Higher Education</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Gabriele Biagini</string-name>
          <email>gabriele.biagini@unifi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stefano Cuomo</string-name>
          <email>stefano.cuomo@unifi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maria Ranieri</string-name>
          <email>maria.ranieri@unifi.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Florence</institution>
          ,
          <addr-line>Florence</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>As Artificial Intelligence (AI) permeates numerous aspects of daily life, fostering AI literacy in higher education becomes vital. This study presents the development and validation of an AI Literacy Questionnaire designed to assess AI literacy across four dimensions, i.e., knowledge-related, operational, critical, and ethical. The questionnaire builds upon the frameworks proposed by Cuomo et al. (2022) and covers a broad spectrum of skills and knowledge, offering a comprehensive and versatile tool for measuring AI literacy. The instrument's reliability and construct validity have been confirmed through rigorous statistical analyses on data collected from a sample of university students. This study acknowledges the challenges posed by the lack of a universally accepted definition of AI literacy and proposes this questionnaire as a robust starting point for further research and development. The AI Literacy Questionnaire provides a crucial resource for educators, policymakers, and researchers as they navigate the complexities of AI literacy in an increasingly AI-infused world.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>AI literacy</kwd>
        <kwd>Scale development</kwd>
        <kwd>Questionnaire</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <title>1.1. The Pertinence of AI literacy</title>
        <p>
          With its rapid advancement, Artificial Intelligence (AI) is increasingly permeating areas of daily life and is used in contexts ranging from medicine to literature [
          <xref ref-type="bibr" rid="ref1 ref2">1,2</xref>
          ]. In this dynamic landscape, higher education institutions have a unique opportunity to enhance students' critical skills and knowledge in AI. To remain relevant, higher education must confront the demands of this rapidly evolving world, and one crucial aspect is fostering AI literacy among students as a critical academic skill [
          <xref ref-type="bibr" rid="ref3 ref4 ref5">3,4,5</xref>
          ].
        </p>
        <p>
          Traditionally, AI concepts have primarily been taught in universities, with a focus on computer
science and engineering principles [
          <xref ref-type="bibr" rid="ref3 ref6 ref7 ref8">3,6,7,8</xref>
          ]. This approach has generated obstacles and barriers
to the development of AI literacy amongst the public [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>
          Furthermore, while AI literacy research has grown in importance in recent years, there is still no widely accepted definition of AI literacy [
          <xref ref-type="bibr" rid="ref1 ref10">1,10</xref>
          ]; being "AI literate" commonly refers to the capacity to comprehend, utilize, monitor, and critically reflect on AI applications, without necessarily possessing the ability to develop AI models and applications oneself [
          <xref ref-type="bibr" rid="ref10 ref9">9,10</xref>
          ].
        </p>
        <p>0000-0002-6203-122X (G. Biagini); 0000-0003-3174-7337 (S. Cuomo); 0000-0002-8080-5436 (M. Ranieri). © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org).</p>
      </sec>
      <sec id="sec-1-2">
        <title>1.2. Assessing AI Literacy</title>
        <p>
          Even though there is no consensus on what AI literacy is, several efforts have been made to develop measurement tools that capture its multidimensionality. However, while some of these tools were developed specifically for evaluating AI literacy after a course [
          <xref ref-type="bibr" rid="ref11 ref12">11,12</xref>
          ], other questionnaires focus on only a few dimensions of AI, such as emotive or collaborative aspects, while sidestepping the very idea of AI literacy because of its intrinsic complexity [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. The "Attitudes Towards Artificial Intelligence Scale" [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ], the "General Attitudes Towards Artificial Intelligence Scale" [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], and the "Artificial Intelligence Anxiety Scale" [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] are three examples of this phenomenon. To address this limitation, we initially constructed a multidimensional framework for AI literacy rooted in the Calvani et al. (2008) concept of digital literacy, which provided the ground for the Cuomo et al. [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] AI Literacy framework. We then developed an AI Literacy Questionnaire that incorporated items from existing assessment tools, as well as new or adapted items, all aligned with the original AI literacy framework.
        </p>
        <p>In this paper, we present the assessment tool we have developed, focusing on the validation procedure we carried out to ensure its reliability. Before presenting the evaluation tool and the validation process, we introduce the background of the study, that is, the above-mentioned AI literacy framework.</p>
      </sec>
      <sec id="sec-1-3">
        <title>1.3. The AI literacy framework: A multidimensional approach</title>
        <p>
          The complexity and multifaceted nature of AI literacy necessitate a comprehensive framework that addresses the different aspects at the core of AI understanding. Our previous research proposed a novel approach, consisting of four key dimensions that collectively encompass the full spectrum of AI literacy [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Together, these dimensions provide a multifaceted lens through which AI literacy can be explored, assessed, and cultivated. They emphasize the necessity of moving beyond mere passive consumption of AI to a more critical and responsible understanding, thereby offering a holistic, integrative pathway for approaching AI literacy. In detail, the framework is composed of the following dimensions:
- Knowledge-related Dimension: it encompasses the understanding of fundamental AI concepts, focusing on basic skills and attitudes that do not require preliminary technological knowledge [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. It includes understanding AI types, machine learning principles, and various AI applications such as artificial vision and voice recognition.
- Operational Dimension: focused on applying AI concepts in various contexts [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], it emphasizes the ability to design and implement algorithms, solve problems using AI tools, and develop simple AI applications to enhance analytical and critical thinking [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ].
- Critical Dimension: highlighting AI's potential to engage students in cognitive, creative, and critical discernment activities [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], it underscores the importance of effective communication and collaboration with AI technologies and of critically evaluating their impact on society.
- Ethical Dimension: concerning the responsible and conscious use of AI technologies, this dimension stresses a balanced view of the delicate ethical issues raised by AI, such as the delegation of personal decisions to a machine (e.g., job placement or therapeutic pathways), and emphasizes the growing attention towards "AI Ethics", encompassing transparency, fairness, responsibility, privacy, and security.
        </p>
        <p>Building upon this multidimensional framework, our research takes a pioneering step towards
an empirical understanding of AI literacy. The existing literature, as previously mentioned, tends
to focus on singular aspects of AI or addresses AI literacy in a more compartmentalized manner.
In contrast, our framework serves as the robust foundation for a newly developed questionnaire,
designed to probe the intricate layers of knowledge-related, operational, critical, and ethical
dimensions of AI. This alignment between theoretical structure and practical assessment tool
marks a significant innovation in the field. By weaving these dimensions into a cohesive
instrument, the questionnaire promises not only to assess AI literacy in a more comprehensive
manner but also to ignite further research and applications that recognize the richness and
complexity of engaging with AI. In the following section, we will delve into the specific design and
methodology of the questionnaire, elucidating how it encapsulates the full breadth of the AI
literacy landscape.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>
        Questionnaire-based survey methods are extensively employed in social science, business
management, and clinical research to gather quantitative data from consumers, customers, and
patients [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. During the creation of a new questionnaire, researchers may consult existing
questionnaires with standard formats found in literature references. This article outlines the
process of designing and developing an empirical questionnaire, as well as validating its
reliability and consistency using various statistical methods.
      </p>
      <p>
        The empirical research method employs a survey-based approach that involves several key steps.
The questionnaire was developed following the recommendations of DeVellis [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], and its
development included the following steps: clearly determine the construct to measure, generate
the items’ pool, determine the format for measurement, have the initial items’ pool reviewed by
experts, administer items to a development sample, and finally evaluate the items.
      </p>
      <sec id="sec-2-1">
        <title>2.1. Identifying the constructs related to the topic</title>
        <p>
          A thorough review of the literature was conducted to determine the meaningful dimensions that conceptually represent the idea of AI literacy. This review included insights from seminal works, including those by Floridi [
          <xref ref-type="bibr" rid="ref21 ref22">21,22</xref>
          ], Ng [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], and Selwyn [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ], among others, and reliable sources such as the European Commission [
          <xref ref-type="bibr" rid="ref24">24,25,26,27,28</xref>
          ], the Joint Research Centre [29], and the Organisation for Economic Co-operation and Development [30,31,32]. It led to the development of the already presented AI literacy framework [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], including the knowledge-related, operational, critical, and ethical dimensions. These dimensions and their definitions (see paragraph 1.3 above) provided the ground to conceptually map the existing tools measuring AI literacy or some of its aspects. The results of this analysis are illustrated in the next section.
        </p>
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Item generation</title>
        <p>As a first step of the item generation process, we further developed our framework by identifying more analytical descriptors for the four main dimensions that the questionnaire aimed to investigate, that is, knowledge-related, operational, critical, and ethical. For this purpose, we examined the relevant literature as well as seminal institutional documents in the field (from bodies such as the European Commission, JRC, OECD, UNESCO, and UNICEF). As a result, we operationalized the framework by mapping the emerging conceptual elements onto it, thus identifying relevant sub-dimensions. Those conceptual elements provided the ground for the generation of the items. The graph below (Figure 1) summarizes the item generation process, from the examination of the literature to the identification of appropriate descriptors up to the creation of the items.</p>
        <p>Figure 1: The item generation process. The review of existing frameworks yielded 38 AI literacy items (10 for the AI knowledge-related dimension, 14 for the AI operational dimension, 8 for the AI critical dimension, and 6 for the AI ethics dimension), while the review of institutional sources (European Commission, HLEG, JRC, OECD, UNESCO, UNICEF) yielded a further 38 items (4 knowledge-related, 8 operational, 10 critical, and 16 ethics).</p>
        <p>Before proceeding with the development of a preliminary draft of the questionnaire, in addition to the analysis of the conceptual elements of AI literacy, a review of already validated questionnaires on related topics, such as technology competence or digital literacy, was conducted in order to select items that could be adapted for measuring AI literacy. The table below summarizes the results of the tools' examination (Table 1). Only then were we able to produce a final survey draft covering a range of AI-related knowledge, skills, attitudes, and behaviors that are relevant in today's rapidly evolving technological landscape. Descriptors or conceptual elements that recurred in at least two independent sources were transformed into items.</p>
        <p>
          We paid close attention to ensuring that the questionnaire covered a comprehensive range of AI literacy dimensions, while maintaining clarity and relevance. By following this process, the initial scale was developed, with 22 items focused on the AI knowledge-related dimension, 32 on the AI operational dimension, 30 on the AI critical dimension, and 34 on the AI ethical dimension. The following table (Table 2) contains some sample items to clarify the final output of the item generation phase [
          <xref ref-type="bibr" rid="ref5">5,10 49,50,51</xref>
          ] [
          <xref ref-type="bibr" rid="ref10 ref15 ref5">5,10, 15, 29, 30,31,32, 49</xref>
          ].
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Expert reviews and face validity</title>
        <p>Face validity is crucial because it assesses whether the questionnaire measures what it intends to measure. It involves reviewing the questionnaire and determining whether the items and their wording seem relevant and appropriate for measuring the construct of interest, that is, AI literacy. To ensure the face validity of the questionnaire, we enlisted a panel of experts (N=5) in the field of AI and educational assessment. It is worth noting that the use of a small group of experts for assessing content validity was considered appropriate in this study, as it focused on a cognitive task that did not require an in-depth understanding of the phenomenon being examined [33,34,35]. These experts were well-versed in AI literacy and possessed a deep understanding of the questionnaire's intended constructs. A draft questionnaire was provided to them, and their feedback on the clarity, relevance, and appropriateness of each item was requested. To ensure a shared understanding of the four AI literacy constructs, the definitions were shared with each expert. The process of content validation consisted of the following steps. The expert panel carefully reviewed each item and provided valuable insights and suggestions for improvement. They pointed out any items that seemed unclear, redundant, or irrelevant to the construct being measured. Their feedback was essential in refining the questionnaire and ensuring that it truly captured the essence of AI literacy.</p>
        <p>The experts were initially asked to categorize each item into one of the four dimensions of our AI literacy framework (i.e., knowledge-related, operational, critical, ethical), following the methodology advocated by Schriesheim and colleagues [35]. If at least four out of the five experts assigned the same classification to an item, it was considered as clearly addressing a concept. Of the 118 items in total, 15 were unclassified or erroneously categorized by two experts, while another 23 were misclassified or unclassified by multiple experts. As a result, these 38 items were not included in the study.</p>
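        <p>The retention rule described above (an item is kept only if at least four of the five experts assign it to the same dimension) can be sketched as follows. The expert votes shown are hypothetical, and retain_item is an illustrative helper, not part of the study's tooling.</p>

```python
from collections import Counter

def retain_item(expert_votes, threshold=4):
    """Return the agreed dimension if at least `threshold` experts
    assigned the item to the same dimension, otherwise None (item dropped)."""
    dimension, count = Counter(expert_votes).most_common(1)[0]
    return dimension if count >= threshold else None

# Hypothetical classifications by the five experts for two candidate items.
clear_item = ["ethical", "ethical", "ethical", "ethical", "critical"]
ambiguous_item = ["critical", "ethical", "operational", "critical", "critical"]

print(retain_item(clear_item))      # retained under the "ethical" dimension
print(retain_item(ambiguous_item))  # None: dropped for insufficient agreement
```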
        <p>The items were then refined in phrasing and format following the experts' suggestions: 14 items were rephrased, and 20 items relating to the impact of AI in education were moved out of the main corpus of the questionnaire into an appendix that can be used in educational contexts as a wider information section.</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. The sample and procedures</title>
        <p>The next step in validating a questionnaire is the administration of the survey, which involves collecting data from a sample of participants who complete the questionnaire. The purpose of questionnaire administration is to gather responses that will be used to evaluate the reliability, validity, and, overall, the methodological robustness of the questionnaire. Following the advice of Likert and Hinkin, our survey uses a 5-point Likert scale, which was deemed more suitable because the questionnaire was administered online. The questionnaire was designed to be presented electronically on computers or mobile phones, allowing for easy transmission and distribution via the Internet. The study was conducted online, in May 2023, via the survey tool "Qualtrics", while all analyses were implemented using the statistical software R [36,37]. The questionnaire was administered to a convenience sample consisting of first-year (2023) student teachers of Primary Education at the University of Florence. After removing missing data, the sample comprised 191 student teachers of Primary Education, including 178 females (93.19%) and 11 males (5.76%). Ages ranged from 18-24 (60.21%) to 55-64 (0.52%), while the highest degree of education completed was high school graduation for 128 respondents (67.55%) and a 3-year university degree for 37 respondents (19.15%). Table 3 summarizes the sample characteristics.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results</title>
      <sec id="sec-3-1">
        <title>3.1. Reliability and validity</title>
        <p>
          The reliability of a questionnaire can be considered as the consistency of the survey results. As measurement error is present in content sampling, changes in respondents, and differences across raters, the consistency of a questionnaire can be evaluated through its internal consistency. Internal consistency is a measure of the inter-correlation of the items of the questionnaire and hence of the consistency with which the intended construct is measured. It is commonly estimated using the coefficient alpha [38], also known as Cronbach's alpha. According to expert suggestions, Cronbach's alpha is expected to be at least 0.70 to indicate adequate internal consistency of a given questionnaire [
          <xref ref-type="bibr" rid="ref20">20,39</xref>
          ]. A low value (below 0.70) of Cronbach's alpha indicates poor internal consistency and, hence, poor inter-relatedness between items. In our survey, Cronbach's alpha, McDonald's omega [40], the composite reliability (CR), and the average variance extracted (AVE) were used to assess the survey's reliability and validity. The findings are shown in Table 4. The survey's overall Cronbach's alpha was 0.953, while the values for the four constructs were, respectively, 0.880, 0.941, 0.858, and 0.914. Although the reliabilities of the individual constructs were all greater than 0.70, the instrument as a whole scored 0.953, indicating that it is more reliable than the individual constructs. The scale's convergent validity was evaluated using the CR and AVE criteria set out by Fornell and Larcker [41]. Cronbach's alpha is a more subjective measure of reliability than CR, and CR values of 0.70 and higher are regarded as satisfactory [42]. The AVE compares the variance captured by a construct to the variance due to measurement error; according to Hair et al. [42], values greater than 0.5 show satisfactory convergence. In our scale, CR values were higher than 0.7, and AVE values were above 0.5, indicating acceptable convergence.
        </p>
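        <p>To make the reliability and convergent-validity computations above concrete, the following sketch re-implements Cronbach's alpha, composite reliability, and AVE in Python. The study's analyses were run in R and jamovi; the item scores and loadings below are hypothetical, not the study's data.</p>

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / variance(total)).
    `scores` is an (n_respondents, n_items) matrix of Likert responses."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """Fornell-Larcker composite reliability from standardized factor loadings."""
    lam = np.asarray(loadings, dtype=float)
    return lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE: mean squared standardized loading; values above 0.5 indicate convergence."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical 5-point Likert responses (6 respondents x 4 items of one construct).
scores = [[4, 5, 4, 5],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 2, 3],
          [1, 2, 1, 1],
          [4, 4, 4, 5]]
loadings = [0.70, 0.75, 0.80, 0.72]  # hypothetical standardized loadings

alpha = cronbach_alpha(scores)
cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
```

        <p>On this toy data all three statistics clear the conventional thresholds (alpha and CR above 0.70, AVE above 0.50), mirroring the checks applied to the actual survey.</p>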
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Identifying the underlying components</title>
        <p>The underlying structure of the 60-item measure was further examined through exploratory factor analysis (EFA). Component or factor loadings indicate which factors are being measured by which questions: questions that measure the same indicators should load onto the same factors, with loadings ranging from -1.0 to 1.0. The factorial structure of the survey scale was investigated by means of principal component analysis (PCA), which indicated a four-component structure, as hypothesized by the framework. The four components were rotated using an orthogonal rotation technique (varimax) to simplify the interpretation of the component loadings. According to the PCA results, the four components with eigenvalues larger than 1.00 accounted for 69.68% of the total extracted variance. This study followed the five rules frequently used as criteria for deciding whether to retain or eliminate items: (1) eigenvalues above the basic root criterion (eigenvalue &gt; 1.00); (2) factor loadings below 0.50; (3) significant factor loadings on multiple factors; (4) at least three indicators or items in a single factor; and (5) single-item factors [33, 42, 43, 44]. Eventually, 40 items emerged from the 60 items, with 10 items focused on the AI knowledge-related dimension, 12 on the AI operational dimension, 10 on the AI critical dimension, and 10 on the AI ethical dimension. The results of the EFA are shown in Table 5. Assumption checks for the final four-factor model resulted in a significant Bartlett's test of sphericity, χ2 = 2375, df = 528, p &lt; .001, showing a viable correlation matrix that deviated significantly from an identity matrix. The overall Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO MSA) was 0.835, indicating amply sufficient sampling. In the confirmatory factor analysis (CFA), the model with the 40 items loaded on the four factors as described emerged as acceptable, with CFI = 0.959, TLI = 0.950, RMSEA = 0.041, and SRMR = 0.05 (Table 6).</p>
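        <p>The component-retention logic described above (principal components with eigenvalues greater than 1.00 are kept) can be illustrated on synthetic data with a known two-factor structure; the generated responses below are hypothetical and stand in for the actual item scores.</p>

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: two latent factors, each driving three observed items,
# mimicking (at a small scale) items that load on distinct dimensions.
n = 500
f1, f2 = rng.normal(size=n), rng.normal(size=n)
noise = lambda: 0.4 * rng.normal(size=n)
items = np.column_stack([f1 + noise(), f1 + noise(), f1 + noise(),
                         f2 + noise(), f2 + noise(), f2 + noise()])

# PCA on the correlation matrix: each eigenvalue is the variance explained
# by one component; Kaiser's rule retains components with eigenvalue > 1.00.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order
retained = int((eigenvalues > 1.0).sum())
explained = eigenvalues[:retained].sum() / eigenvalues.sum()
```

        <p>On this toy data the rule recovers the two planted factors. In the study, the same criterion, applied alongside the loading-based rules listed above, produced the four-component solution explaining 69.68% of the variance.</p>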
        <p>Table 5 reports the factor loadings of the retained items across the knowledge-related, operational, critical, and ethical dimensions (loadings range from 0.547 to 0.857; absolute values less than 0.5 were suppressed).</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>
        This study presents the development and validation of a 40-item assessment scale that provides academics with an instrument for evaluating users' critical skills in using AI along its fundamental constructs (i.e., knowledge-related, operational, critical, and ethical). Through the creation and validation of a new AI literacy scale, it sought to advance our understanding of AI literacy. The proposed approach is rooted in the Calvani et al. [45] notion of digital literacy, which provided the conceptual ground for the Cuomo et al. [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] AI Literacy framework. We carried out a scoping assessment using DeVellis' recommendations to find suitable items (n=118) related to AI literacy, had the item pool refined by the experts (n=60), and then used EFA and CFA to demonstrate the questionnaire's reliability (α = 0.95, AVE = 0.53).
      </p>
      </p>
      <p>
        The theoretical model, based on four separate constructs as suggested by the adopted framework [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], emerged as the most suitable conceptualization of AI literacy, according to the findings of the factor analysis. The other analyses, such as CR (0.94), also suggested good construct validity. When putting the questionnaire to use in practice, a few points are noteworthy. The first is that the instrument as a whole is more trustworthy than the constructs alone: the instrument's reliability was above 0.95, even though all four constructs showed reliability coefficients greater than 0.70. Therefore, rather than using the separate constructs, it is advisable to use the instrument as a whole, in keeping with the multidimensionality of AI literacy. Furthermore, we intend to advance and promote future research in this field by defining the AI literacy domain and offering useful measurement tools, that is, by conceptualizing AI literacy and creating appropriate methods for evaluating it. In this way, designers will be better able to build realistic user models and, subsequently, constructs capable of explaining AI systems based on these models.
      </p>
      </p>
      <p>
        In the landscape of questionnaires aimed at evaluating AI literacy, the novelty and strength of our questionnaire lie in its comprehensive approach to the multidimensional nature of AI literacy. While existing scales [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13,14,15</xref>
        ] primarily target specific or isolated aspects of AI, such as emotive or collaborative dimensions, or were developed for evaluating AI literacy after a course [
        <xref ref-type="bibr" rid="ref12 ref7">7,12</xref>
        ], our questionnaire rigorously acknowledges and assesses the intrinsic complexity of AI literacy, embracing a multifaceted perspective and providing a more nuanced, holistic understanding of individuals' comprehension, attitudes, and engagement with AI. This focus not only fills a critical gap in the existing literature but also offers new pathways for educators, policymakers, and researchers to cultivate a more profound and integrative AI literacy across various sectors and populations.
      </p>
      </p>
    </sec>
    <sec id="sec-5">
      <title>5. Limitations</title>
      <p>It is important to emphasize that these conclusions cannot be generalized, given the characteristics of the sample (i.e., a convenience sample, therefore neither probabilistic nor representative of the reference population). Furthermore, the sample was drawn primarily from higher education; representatives of other subpopulations, such as secondary education, may have slightly different perspectives on various aspects of AI literacy. Future studies should therefore examine the extent to which the item set is applicable to other fields. Moreover, to better understand the subject and to promote conditions suitable for implementing successful educational AI literacy paths, additional research in this field is required.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>
        In conclusion, this study underscores the importance and urgency of AI literacy measurement
tools. In an era where AI is ubiquitous and integral to many aspects of our lives, the need for AI
literacy is no longer a prospective necessity, but a present one. By recognizing the multiplicity of
definitions and obstacles in the development of AI literacy, we developed an assessment tool
based on a multidimensional framework [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Grounded in the concept of digital literacy [45] and
embracing various aspects of AI literacy including knowledge, skills, attitudes, and behaviors, this
tool has been thoroughly validated, showing high reliability and construct validity. Our research
contributes to the ongoing academic discourse by proposing a theoretically and empirically
sound instrument for assessing AI literacy. We acknowledge that given the diverse definitions
and applications of AI literacy, the tool we've developed is by no means definitive, but instead
offers a robust starting point for educators, researchers, and policymakers.
      </p>
      <p>Future research must continue refining the conceptualization and measurement of AI literacy,
exploring how this literacy shapes students' ability to engage with AI and what broader effects this
engagement has on society. The journey to widespread AI literacy is undoubtedly a complex one,
but it is a journey we must undertake with vigor and commitment if we are to equip the next
generation with the tools they need to navigate a world increasingly mediated by AI.</p>
      <p>[25] European Commission, Shaping Europe’s digital future—European strategy for data, 2021.
[26] European Commission, High-Level Expert Group on Artificial Intelligence, 2018. Available online at: https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai.
[27] European Commission, Pilot the Assessment List of the Ethics Guidelines for Trustworthy AI, 2019. Available online at: https://ec.europa.eu/futurium/en/ethics-guidelinestrustworthy-ai/register-piloting-process-0.html.
[28] European Commission, On Artificial Intelligence - A European approach to excellence and trust, Technical report, Brussels, 2020.
[29] European Commission, Joint Research Centre (JRC) &amp; Organisation for Economic Co-operation and Development (OECD), AI watch, national strategies on artificial intelligence: a European perspective, Publications Office of the European Union, 2021. doi:10.2760/069178.
[30] Organisation for Economic Co-operation and Development (OECD), Bridging the digital gender divide: Include, upskill, innovate, OECD Publishing, 2018a. Available online at: http://www.oecd.org/digital/bridging-the-digital-gender-divide.pdf.
[31] Organisation for Economic Co-operation and Development (OECD), Future of education and skills 2030: Conceptual learning framework, OECD Publishing, 2018b. Available online at: https://www.oecd.org/education/2030/Education-and-AI-preparing-forthe-future-AIAttitudes-and-Values.pdf.
[32] Organisation for Economic Co-operation and Development (OECD), Recommendation of the Council on Artificial Intelligence, OECD Publishing, 2019. Available online at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
[33] J. C. Anderson &amp; D. W. Gerbing, Predicting the performance of measures in a confirmatory factor analysis with a pretest assessment of their substantive validities, in: Journal of Applied Psychology, vol. 76, no. 5, 1991, pp. 732-740.
[34] T. R. Hinkin, A brief tutorial on the development of measures for use in survey questionnaires, in: Organizational Research Methods, vol. 1, no. 1, 1998, pp. 104-121.
[35] C. A. Schriesheim, K. J. Powers, T. A. Scandura, C. C. Gardiner, &amp; M. J. Lankau, Improving construct measurement in management research: Comments and a quantitative approach for assessing the theoretical content adequacy of paper-and-pencil survey-type instruments, in: Journal of Management, vol. 19, no. 2, 1993, pp. 385-417.
R Core Team, R: A language and environment for statistical computing [Computer software], 2018. Available online at: https://cran.rproject.org/.
[36] Jamovi Project, jamovi (Version 0.9) [Computer Software], 2019. Available online at: https://www.jamovi.org/.
[37] L. J. Cronbach, Coefficient alpha and the internal structure of tests, in: Psychometrika, vol. 16, 1951, pp. 297–334.
[38] J. Nunnally, Psychometric Theory, New York: McGraw-Hill, 1978.
[39] R. P. McDonald, Test theory: A unified treatment, Mahwah, NJ: L. Erlbaum Associates, 1999.
[40] C. Fornell &amp; D. F. Larcker, Evaluating Structural Equation Models with Unobservable Variables and Measurement Error, in: Journal of Marketing Research, vol. 18, no. 1, 1981, pp. 39–50.
[41] J. E. Hair Jr, R. E. Anderson, R. L. Tatham, &amp; W. C. Black, Multivariate data analysis (5th ed.), Upper Saddle River, NJ: Prentice-Hall, 1998.
[42] J. Hair, W. C. Black, B. J. Babin, &amp; R. E. Anderson, Multivariate data analysis (7th ed.), Upper Saddle River, NJ: Pearson Education International, 2010.
[43] D. W. Straub, Validating instruments in MIS research, in: MIS Quarterly, vol. 13, no. 2, 1989, pp. 147–169.
[44] A. Calvani, A. Cartelli, A. Fini, &amp; M. Ranieri, Models and Instruments for Assessing Digital Competence at School, in: Journal of E-Learning and Knowledge Society, vol. 4, no. 3, 2008, pp. 183–193.
[45] United Nations Children’s Fund, Policy guidance on AI for children Draft 1.0, 2020. Available online at: https://www.unicef.org/globalinsight/media/1171/file/UNICEF-GlobalInsight-policy-guidance-AI-children-draft-1.0-2020.pdf.
[46] United Nations Children’s Fund, AI policy guidance: How the world responded, 2021a. Available online at: https://www.unicef.org/globalinsight/stories/ai-policy-guidance-howworld-responded.
[47] United Nations Children’s Fund, Policy guidance on AI for children 2.0, UNICEF, 2021b. Available online at: https://www.unicef.org/globalinsight/media/2356/file/UNICEF-GlobalInsight-policy-guidance-AI-children-2.0-2021.pdf.
[48] United Nations Educational, Scientific and Cultural Organization (UNESCO), Beijing consensus on artificial intelligence and education, 2019a. Available online at: https://unesdoc.unesco.org/ark:/48223/pf0000368303.
[49] United Nations Educational, Scientific and Cultural Organization (UNESCO), Stepping up AI for social good, 2019b.
[50] United Nations Educational, Scientific and Cultural Organization (UNESCO), AI and education: Guidance for policy makers, 2021. Available online at: https://unesdoc.unesco.org/ark:/48223/pf0000376709.
[51] B. Wang, P. L. P. Rau, &amp; T. Yuan, Measuring user competence in using artificial intelligence: Validity and reliability of artificial intelligence literacy scale, in: Behaviour and Information Technology, 2022. doi:10.1080/0144929X.2022.2072768.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Laupichler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aster</surname>
          </string-name>
          , &amp; T. Raupach,
          <article-title>Delphi study for the development and preliminary validation of an item set for the assessment of non-experts' AI literacy</article-title>
          ,
          <source>in: Computers and Education: Artificial Intelligence</source>
          , vol.
          <volume>4</volume>
          ,
          <issue>2023</issue>
          , pp.
          <fpage>100126</fpage>
          . doi:10.1016/j.caeai.2023.100126.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Southworth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Migliaccio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Glover</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Glover</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Reed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>McCarty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Brendemuhl</surname>
          </string-name>
          , &amp; A. Thomas,
          <article-title>Developing a model for AI Across the curriculum: Transforming the higher education landscape via innovation in AI literacy</article-title>
          ,
          <source>in: Computers and Education: Artificial Intelligence</source>
          , vol.
          <volume>4</volume>
          ,
          <issue>2023</issue>
          , pp.
          <fpage>100127</fpage>
          . doi:10.1016/j.caeai.2023.100127.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kandlhofer</surname>
          </string-name>
          , G. Steinbauer,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hirschmugl-Gaisch</surname>
          </string-name>
          , &amp; P. Huber, Artificial Intelligence and Computer Science in Education: From Kindergarten to University, Paper presented at the
          <source>2016 IEEE Frontiers in Education Conference (FIE)</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Luckin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cukurova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kent</surname>
          </string-name>
          , &amp;
          <string-name>
            <given-names>B. Du</given-names>
            <surname>Boulay</surname>
          </string-name>
          ,
          <article-title>Empowering educators to be AI-ready</article-title>
          ,
          <source>in: Computers and Education: Artificial Intelligence</source>
          , vol.
          <volume>3</volume>
          ,
          <issue>2022</issue>
          , pp.
          <fpage>100076</fpage>
          . doi:10.1016/j.caeai.2022.100076.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D. T. K.</given-names>
            <surname>Ng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. K. L.</given-names>
            <surname>Leung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K. W.</given-names>
            <surname>Chu</surname>
          </string-name>
          , &amp;
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Qiao</surname>
          </string-name>
          ,
          <article-title>Conceptualizing AI literacy: An exploratory review</article-title>
          ,
          <source>in: Computers and Education: Artificial Intelligence</source>
          , vol.
          <volume>2</volume>
          ,
          <issue>2021</issue>
          , pp.
          <fpage>100041</fpage>
          . doi:10.1016/j.caeai.2021.100041.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. W. K.</given-names>
            <surname>Ho</surname>
          </string-name>
          , &amp;
          <string-name>
            <surname>M. Scadding</surname>
          </string-name>
          ,
          <article-title>Classroom activities for teaching artificial intelligence to primary school students</article-title>
          , in S. C. Kong, D. Andone, G. Biswas, H. U. Hoppe, T. Hsu, B. C. Kuo, K. Y. Li, C. Looi, M. Milrad, J. Sheldon, J. Shih, K. Sin, K. Song, &amp; J. Vahrenhold (Eds.),
          <source>Proceedings of international conference on computational thinking education</source>
          <year>2019</year>
          , The Education University of Hong Kong,
          <year>2019</year>
          , pp.
          <fpage>157</fpage>
          -
          <lpage>159</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Kong</surname>
          </string-name>
          , &amp; H.
          <string-name>
            <surname>Abelson</surname>
          </string-name>
          (Eds.),
          <article-title>Computational thinking education in K-12: Artificial intelligence literacy and physical computing</article-title>
          , MIT Press,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.-C.</given-names>
            <surname>Kong</surname>
          </string-name>
          , &amp; G. Zhang,
          <article-title>Evaluating an Artificial Intelligence Literacy Programme for Developing University Students' Conceptual Understanding, Literacy, Empowerment and Ethical Awareness</article-title>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Long</surname>
          </string-name>
          , &amp; B.
          <string-name>
            <surname>Magerko</surname>
          </string-name>
          ,
          <article-title>What is AI Literacy? Competencies and Design Considerations</article-title>
          ,
          <source>in: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . doi:10.1145/3313831.3376727.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Cuomo</surname>
          </string-name>
          , G. Biagini, &amp;
          <string-name>
            <surname>M. Ranieri</surname>
          </string-name>
          ,
          <article-title>Artificial Intelligence Literacy, che cos'è e come promuoverla. Dall'analisi della letteratura ad una proposta di Framework</article-title>
          ,
          <source>in: Media Education</source>
          ,
          <year>2022</year>
          . doi:10.36253/me-13374.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Dai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. S.</given-names>
            <surname>Chai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. Y.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. S. Y.</given-names>
            <surname>Jong</surname>
          </string-name>
          , Y. Guo, &amp; J.
          <string-name>
            <surname>Qin</surname>
          </string-name>
          ,
          <article-title>Promoting students' well-being by developing their readiness for the artificial intelligence age</article-title>
          ,
          <source>in: Sustainability</source>
          , vol.
          <volume>12</volume>
          , no.
          <issue>16</issue>
          ,
          <year>2020</year>
          . doi:10.3390/su12166597.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.-C.</given-names>
            <surname>Kong</surname>
          </string-name>
          , W. M.-
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cheung</surname>
          </string-name>
          , &amp; O.
          <string-name>
            <surname>Tsang</surname>
          </string-name>
          ,
          <article-title>Evaluating an artificial intelligence literacy programme for empowering and developing concepts, literacy and ethical awareness in senior secondary students</article-title>
          ,
          <source>in: Education and Information Technologies</source>
          ,
          <year>2022</year>
          . doi:10.1007/s10639-022-11408-7.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Sindermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wernicke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. S.</given-names>
            <surname>Schmitt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          , &amp;
          <string-name>
            <surname>C. Montag</surname>
          </string-name>
          ,
          <article-title>Assessing the attitude towards artificial intelligence: Introduction of a short measure in German, Chinese, and English Language</article-title>
          ,
          <source>in: KI-Künstliche Intelligenz</source>
          , vol.
          <volume>35</volume>
          , no.
          <issue>1</issue>
          ,
          <issue>2021</issue>
          , pp.
          <fpage>109</fpage>
          -
          <lpage>118</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Schepman</surname>
          </string-name>
          , &amp; P. Rodway,
          <article-title>Initial validation of the general attitudes towards Artificial Intelligence Scale</article-title>
          ,
          <source>in: Computers in Human Behavior Reports</source>
          , vol.
          <volume>1</volume>
          ,
          <issue>2020</issue>
          , pp.
          <fpage>100014</fpage>
          . doi:10.1016/j.chbr.2020.100014.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Y. Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          , &amp;
          <string-name>
            <given-names>Y. S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Development and validation of an artificial intelligence anxiety scale: An initial application in predicting motivated learning behavior</article-title>
          ,
          <source>in: Interactive Learning Environments</source>
          , vol.
          <volume>30</volume>
          , no.
          <issue>4</issue>
          ,
          <issue>2022</issue>
          , pp.
          <fpage>619</fpage>
          -
          <lpage>634</lpage>
          . doi:10.1080/10494820.2019.1674887.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Druga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. T.</given-names>
            <surname>Vu</surname>
          </string-name>
          , E. Likhith, &amp; T. Qiu,
          <article-title>Inclusive AI literacy for kids around the world</article-title>
          ,
          <source>in: Proceedings of FabLearn</source>
          <year>2019</year>
          ,
          <year>2019</year>
          , pp.
          <fpage>104</fpage>
          -
          <lpage>111</lpage>
          . doi:10.1145/3311890.3311904.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          , &amp; H.
          <string-name>
            <surname>Kim</surname>
          </string-name>
          , Why and What to Teach:
          <source>AI Curriculum for Elementary School</source>
          ,
          <year>2021</year>
          , p.
          <fpage>8</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhong</surname>
          </string-name>
          , &amp;
          <string-name>
            <surname>D. T. K. Ng</surname>
          </string-name>
          ,
          <article-title>A meta-review of literature on educational approaches for teaching AI at the K-12 levels in the Asia-Pacific region</article-title>
          ,
          <source>in: Computers and Education: Artificial Intelligence</source>
          , vol.
          <volume>3</volume>
          ,
          <issue>2022</issue>
          , pp.
          <fpage>100065</fpage>
          . doi:10.1016/j.caeai.2022.100065.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>A.</given-names>
            <surname>Aithal</surname>
          </string-name>
          &amp;
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Aithal</surname>
          </string-name>
          ,
          <article-title>Development and Validation of Survey Questionnaire &amp; Experimental Data - A Systematical Review-based Statistical Approach</article-title>
          , in:
          <source>International Journal of Management, Technology, and Social Sciences (IJMTS)</source>
          , vol.
          <volume>5</volume>
          , no.
          <issue>2</issue>
          ,
          <issue>2020</issue>
          , pp.
          <fpage>233</fpage>
          -
          <lpage>251</lpage>
          . DOI: http://doi.org/10.5281/zenodo.4179499.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>R. F. DeVellis</surname>
          </string-name>
          ,
          <source>Scale Development: Theory and Applications</source>
          , Vol.
          <volume>21</volume>
          ,
          Sage Publications
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cowls</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Beltrametti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chatila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chazerand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dignum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Luetge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Madelin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Pagallo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rossi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Schafer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Valcke</surname>
          </string-name>
          , &amp; E. Vayena,
          <article-title>AI4People-An Ethical Framework for a Good AI Society</article-title>
          : Opportunities, Risks, Principles, and Recommendations,
          <source>in: Minds and Machines</source>
          , vol.
          <volume>28</volume>
          , no.
          <issue>4</issue>
          ,
          <issue>2018</issue>
          , pp.
          <fpage>689</fpage>
          -
          <lpage>707</lpage>
          . http://dx.doi.org/10.1007/s11023-018-9482-5.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <article-title>Introduction-The Importance of an Ethics-First Approach to the Development of AI</article-title>
          ,
          <source>in: Ethics, Governance, and Policies in Artificial Intelligence</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>N.</given-names>
            <surname>Selwyn</surname>
          </string-name>
          ,
          <article-title>The future of AI and education: Some cautionary notes</article-title>
          ,
          <source>in: European Journal of Education</source>
          , vol.
          <volume>57</volume>
          , no.
          <issue>4</issue>
          ,
          <issue>2022</issue>
          , pp.
          <fpage>620</fpage>
          -
          <lpage>631</lpage>
          . doi:10.1111/ejed.12532.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          European Commission,
          <source>Joint Research Centre (JRC)</source>
          ,
          <article-title>The impact of Artificial Intelligence on learning, teaching, and education</article-title>
          ,
          <source>Publications Office</source>
          ,
          <year>2018</year>
          . https://data.europa.eu/doi/10.2760/12297.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>