<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Engaging Teachers in Co-Designing Examinations for Secondary Schools in the Era of Large Language Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Erik Winerö</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marie Utterberg Modén</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Gothenburg, Department of Applied IT</institution>
          ,
          <addr-line>Gothenburg</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This study investigates the role of large language models (LLMs) in co-designing examination tasks for secondary school teachers. Using end-user development (EUD) principles, we explore how teachers can design examinations that either restrict or incorporate generative AI. Through a series of workshops with 153 teachers from 7 schools, we gathered qualitative data on their current assessment practices and their interactions with AI. Our findings reveal diverse strategies for integrating AI in educational assessments, highlighting both opportunities and challenges. This research underscores the importance of professional development to enhance teachers' AI literacy and improve the congruence between their technological frames and pedagogical practices.</p>
      </abstract>
      <kwd-group>
        <kwd>Large language models</kwd>
        <kwd>co-design</kwd>
        <kwd>educational assessment</kwd>
        <kwd>examinations</kwd>
        <kwd>secondary education</kwd>
        <kwd>AI literacy</kwd>
        <kwd>end-user development</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        There is growing interest among researchers in the field of educational technology in employing
participatory design methods that empower teachers through genuine
engagement with a relevant problem [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Throughout its history, participatory design has
sought to foster new insights, skills, visions, and democratic awareness among individuals by
involving them in design and technology initiatives. Early participatory design projects were
guided by these political commitments and aimed to empower future users to participate
actively in technological development [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        However, Pérez-Sanagustín et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] found in their literature review of research on
technology in education that studies concerned with the design of digital solutions often lack
meaningful involvement of, and mutual collaboration with, teachers. At present, AI and large
language models (LLMs) have been described as the most prominent and controversial
technologies to impact education [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Introducing AI in classrooms becomes challenging when
'available tools and curriculum are incompatible with values and contexts' of the people who
use them [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        By incorporating end-user development (EUD) principles, we investigate how teachers
navigate and strategize around the use of generative AI in their assessment practices, influenced
by their own interpretations and understandings of the technology. While traditional EUD often
involves control over design [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], in educational contexts, teachers typically have limited
influence over the design of the technology they use. Therefore, we include the concept of EUD
to encompass the ways teachers adapt their practices and create new workflows around existing
technologies, particularly AI and LLMs. Thus, EUD involves teachers developing novel
approaches to use existing technologies, customizing their pedagogical practices, and creating
new assessment methodologies that either incorporate or exclude AI use.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Large language models in education</title>
      <p>
        In May 2022, Sharples [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] published a blog post titled “New AI tools that can write student
essays require educators to rethink teaching and assessment”. The impetus for Sharples’ post
was the rapid advancement in the field of generative AI and Large Language Models that began
to prominently emerge at the start of the decade. Notably, OpenAI's GPT-3, launched in 2020,
showcased an unprecedented ability to produce text virtually indistinguishable from that
written by humans. Sharples' contribution stands as an early instance where an educator
underscored the impact of the emerging LLMs on traditional assessment methods. Since that
time, a growing chorus of voices has resonated with similar concerns, prompting national and
global educational institutions and bodies like UNESCO [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] to formulate guidelines addressing
these challenges.
      </p>
      <p>
        In educational settings, various forms of written examinations have long served as an
essential method for assessment. The inherent reason is that declarative knowledge is
conveyed through language, and written language has historically been intuitive to use from
an assessment standpoint due to its physical manifestation. The spoken word is by its nature
temporary and context-dependent [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This has made written examinations a practice deeply
intertwined with the use of tools, ranging from traditional pen and paper to modern word
processors. In recent decades, digital applications like grammar checks and spell checks have
assisted students in refining their texts. Generally, these tools are perceived as means to polish
the surface of a text without adversely affecting its ability to serve as a valid representation of
the student's thoughts and learning. However, the recent rapid development and increased
access to LLMs and systems such as ChatGPT, Google Gemini and Claude have come to challenge
these notions.
      </p>
      <p>
        In discussions about the role of technology in society, two competing perspectives often
emerge. The first, rooted in technodeterminism, posits that it is the technology itself that drives
change, shaping educational practices and outcomes almost independently of the users'
intentions or understanding. An often-mentioned example is the expectation that introducing
computers in schools would transform teaching and learning. The second perspective emphasizes
the role of users, arguing that the impact of technology is mediated by how users perceive,
interpret, and integrate technology into their practices. This latter view challenges the notion
of technology as an autonomous force, instead highlighting the importance of socio-cognitive
factors in shaping its implementation. Our study aligns with this second perspective, focusing
on how teachers' perceptions and interpretations of technology influence its integration into
educational practices. We apply Orlikowski and Gash's concept of technological frames [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ],
extending it with Orlikowski’s notion of sociomateriality [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], to explore how teachers'
assumptions, expectations, and knowledge about generative AI and LLMs shape
teachers' assessment practices.
      </p>
      <p>From this perspective, teachers' technological frames become foundational to their
reasoning, design process, and ultimately, the assessment methods they employ. By utilizing
this theoretical assumption as a lens, we aim to elucidate the connection between teachers'
assumptions, expectations, and knowledge, and how these shape their pedagogical practices.
Specifically, we examine how teachers' technological frames regarding generative AI and LLMs
influence their development of summative assessment methods. Ultimately, our goal is to
contribute to a deeper understanding of the complex interplay between pedagogy and
technology. Through this, we hope to empower teachers to develop assessment methods that
both incorporate and exclude AI in a conscious, well-reasoned, and sustainable manner.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Method</title>
      <p>The study employed a qualitative approach, conducting workshops with teachers where they
were tasked with designing two types of assessment tasks: one that restricts AI use and one
that integrates AI. Data were collected through observation and analysis of the tasks during 11
workshops involving a total of 153 teachers from 7 schools. The first workshop took place at the
beginning of March 2023, and the last in early June 2023.</p>
      <p>Our research was designed to serve a dual purpose: to collect data relevant to our study and
to provide professional development for the teachers involved. This approach was in response
to the large number of inquiries we received from schools regarding professional development
in generative AI during the winter and spring of 2023.</p>
      <p>While this study focuses on teachers in Sweden, the sample size provides a robust basis for
qualitative analysis. The participants represent a diverse group of educators across various
subjects and experience levels, enhancing the transferability of the findings. Moreover, the challenges
and opportunities presented by generative AI in education are largely universal, transcending
national boundaries. The rapid global adoption of AI technologies in education suggests that
many of the insights gained from this study may be applicable to international contexts.
However, we acknowledge that cultural, policy, and infrastructural differences may influence
the specific ways in which AI is integrated into educational practices across different countries.
Future research could explore these potential variations in more depth.</p>
      <p>Each workshop commenced with an overview of generative AI and LLMs, aimed at
equipping participants with a common understanding without swaying their perspectives. We
deliberately avoided in-depth discussions about how LLMs work, such as their foundation in
statistical models or the uniqueness and unpredictability of their generated texts. Our
introduction was limited to a brief historical overview and a demonstration of ChatGPT,
ensuring it was a primer rather than a detailed lecture. The participating teachers were then
individually asked to document their most recent and frequently used assessment practices.
Following this, teachers were grouped randomly into small groups to share and discuss their
summative assessment practices in relation to recent development in LLMs. After this, the
teachers were randomly grouped into pairs or trios with the task of designing two separate
variants of examinations. One variant was to be designed in such a manner that generative AI
could not be used or would not be beneficial for students to use. The other variant was to
incorporate generative AI as a central component of the examination. The teachers were given
about 30 minutes to design these examinations, after which they presented them to the rest of
the group. These presentations were recorded and transcribed, serving as the main data for this
study.</p>
      <p>The teachers were also given an anonymous survey where they were asked to provide
information about the subjects they taught, whether they held a teaching certification, how
long they had been teaching, and whether they had received any specific professional
development in the field of AI, and to what extent they potentially had explored generative AI
independently outside of their formal work responsibilities.</p>
      <p>It is important to highlight that we did not categorize teachers based on the subjects they
taught. This decision was driven by our aim to foster a broader discourse. Given that teachers
in Sweden frequently work in interdisciplinary teams (as was the case with all participating
teachers in this study), we chose to keep the discussions within the study
subject-transcendent.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Results and findings</title>
      <p>Out of 152 teachers, 137 responded to the survey, which represents a response rate of
approximately 90%. Among the respondents, 89 teachers indicated they taught at the middle
school level, while 48 reported teaching at the high school level. These educators taught across
a diverse range of subjects, with a notable emphasis on Humanities and Social Sciences
(HUMSS) and Science, Technology, Engineering, and Mathematics (STEM), in addition to
languages like Swedish and English, arts, and physical education. Their experience in the field
varied significantly, averaging nearly 18 years, with a span from 1 to 35 years. Of the middle
school teachers, 73 percent held teaching certifications, compared to the national figure of 71.5
percent (Swedish National Agency for Education, 2024). For high school teachers in the study,
the certification rate was 83 percent, against a national average of 84.2 percent. Regarding the
teachers' previous experience of generative AI and LLMs, the survey answers were grouped
into three categories: none, some, and extensive; no responses were judged to meet the criteria
for the latter (fig. 1). Responses categorized as 'some' typically referred to professional
development consisting of a short lecture or information session about ChatGPT; as for
non-professional experience, it mostly involved teachers having tested and explored generative
AI services on their own on one or a few occasions (fig. 2).
[Fig. 1 and Fig. 2: 83% and 43% of respondents, respectively, reported 'None'.]</p>
      <p>
In the task of designing an assessment that would exclude AI, an overwhelming majority (93%)
of the teachers chose to employ various forms of traditional assessment situations, where
the environment and tools were controlled and restricted to limit students' access to generative
AI (fig. 3). This was typically achieved through a) traditional in-class exams, returning to
paper-and-pencil exams conducted in the classroom; b) digital assessment systems, utilizing
digital platforms that limit internet access, thereby preventing the use of AI tools; and c) oral
examinations, conducting assessments orally to ensure that AI cannot be used during the
evaluation process. A smaller portion of the teachers (7%) chose to design tasks they
perceived to be inherently constructed in such a way that students were not justified in using AI.
Examples of such tasks included assignments where students were asked to write more personal
texts, such as referring to their own experiences, or to write analyses of texts provided only at
the time of the exam. In these and other described assessment formats, it could be argued that
the teachers underestimated the capabilities of generative AI, which is why we refer to these as
(perceived) AI-resistant tasks.</p>
      <p>The overwhelming preference (93%) for traditional, controlled assessment environments to
limit AI use reveals a significant trend in teachers' approaches to excluding AI from
assessments. This reliance on established methods suggests that teachers may feel more
confident in their ability to control the assessment environment rather than in designing
inherently 'AI-proof' tasks.</p>
      <p>The small portion (7%) of teachers who attempted to design tasks they perceived as
inherently resistant to AI use presents an intriguing area for further exploration. These
attempts, which included assignments focusing on personal experiences or real-time analysis
of provided texts, highlight the creative approaches some educators are taking to address the
challenges posed by AI. However, the limited adoption of such strategies also underscores the
difficulty in designing truly 'AI-resistant' tasks in an era of rapidly advancing language models.</p>
      <p>This stark contrast between traditional control methods and attempts at AI-resistant task
design provides valuable insights into teachers' current comfort levels and perceived
capabilities in navigating the AI landscape in education. It suggests a need for professional
development not only in AI integration but also in designing authentic assessments that
maintain their validity in an AI-rich environment without necessarily reverting to traditional,
controlled testing situations.</p>
      <p>[Fig. 3. Bar chart: Controlling the Assessment Environment, 93%; Designing (perceived) AI-Resistant Tasks, 7%.]</p>
      <p>When it came to the task of designing assessments where AI would be integrated, a majority
(86%) chose approaches in which students would analyze AI-generated material, such as
analyzing (and in some cases even detecting) AI-generated texts. A smaller portion (14%) chose
tasks where students would in various ways build upon AI-generated material. This could
involve allowing AI to generate basic material so that students could focus on more complex
aspects of the task; for example, AI might generate the foundations of a song or a piece of
writing, which students then refine and develop further. Other examples included using AI for
language practice, employing AI chatbots for conversational practice to enhance language skills,
and fairer oral assessments, generating scripts with AI for oral presentations so that only the
delivery, and not the scriptwriting, is assessed. The latter method aimed to create a more
equitable assessment environment.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>Analysis of the teachers' presentations revealed two primary perspectives on how AI can be
incorporated into assessment practices. These perspectives can be illustrated through a Venn
diagram (Fig. 5). The dominant approach among teachers was to view AI as a learning objective.
In this context, teachers designed tasks where students were required to reflect on and analyse
AI-generated content. A less prominent, yet significant perspective was the view of AI as a tool.
Here, teachers provided fewer concrete examples, potentially indicating limited personal
experience in working with AI. One of the few examples mentioned was allowing students to
use AI to generate scripts for oral presentations, which would enable an assessment focused on
delivery rather than script writing.</p>
      <p>
        This discrepancy between the two perspectives can be understood through the concept of
'technological frames' [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which highlights how teachers' own assumptions, expectations, and
knowledge about AI influence their conceptions of its role in teaching and assessment. The
Venn diagram serves as a visual representation of these two dimensions - AI as a learning
objective and AI as a tool - and their potential overlap in assessment. It is worth noting that this
conceptualization applies only to the tasks where teachers were asked to include AI, not to
those where AI was to be excluded.
      </p>
      <sec id="sec-5-1">
        <title>The Venn diagram: Assessment, AI as a learning objective, and AI as a tool</title>
        <p>[Fig. 5. Venn diagram with three overlapping circles: Assessment, AI as a learning objective, and AI as a tool.]</p>
        <p>This Venn diagram illustrates the complex interplay between Assessment, AI as a learning
objective, and AI as a tool in the context of our study. Each circle represents a key area:</p>
        <p>Assessment (Red circle): Traditional and evolving methods of evaluating student learning
and performance.</p>
        <p>AI as a learning objective (Purple circle): Teaching students about AI, including its
capabilities, limitations, and societal impacts.</p>
        <p>AI as a tool (Green circle): Using AI technologies to support or enhance learning and
teaching processes.</p>
        <p>The intersections of these circles represent areas where teachers are engaging in complex
forms of end-user development (EUD):</p>
        <p>Assessment + AI as a learning objective: Here, teachers design assessments that test students'
understanding of AI concepts, such as the tasks where students analyse AI-generated texts.</p>
        <p>Assessment + AI as a tool: This intersection involves using AI to support assessment
processes, as seen in the tasks where AI generates basic material for students to build upon.</p>
        <p>AI as a learning objective + AI as a tool: This area combines teaching about AI with practical
use of AI tools, reflected in tasks where students both use and critically analyse AI.</p>
        <p>The central intersection represents the most integrated approach, where AI is
simultaneously a subject of study, a tool for learning, and part of the assessment process.</p>
        <p>While the Venn diagram (Fig. 5) serves as a useful heuristic tool for visualizing the different
perspectives on AI integration in assessment practices, it is important to acknowledge its
limitations. As with any model, it necessarily simplifies the complex reality of teachers' attitudes
and practices. The clear-cut categories and intersections may not fully capture the nuanced and
sometimes conflicting views that individual teachers may hold. Despite these limitations, the
Venn diagram provides insights by offering a clear, visual representation of the main themes
that emerged from our analysis. It helps to conceptualize the different ways teachers approach
AI in assessment and highlights potential areas of integration. Future research could build upon
this model, perhaps developing more sophisticated representations that capture additional
dimensions of teachers' perspectives and practices.</p>
        <p>Having acknowledged both the utility and limitations of the Venn diagram as an analytical
tool, we can now delve deeper into interpreting the patterns it helps us visualize. One
particularly striking observation is the teachers' predominant focus on AI as a learning
objective. We interpret this focus, for both teachers and students, as indicative of a broader
phenomenon. Specifically, we see it as a sign that teachers themselves are not yet sufficiently
knowledgeable or comfortable with using AI in their practice. The tasks they assign reflect their
own knowledge and limitations. In other words, the assignments and the skills they assess are
influenced by their own understanding, which argues against a technodeterministic perspective
and supports a socio-cognitive perspective.</p>
        <p>While our findings suggest a connection between teachers' limited experience with AI and
their tendency to focus on AI as a learning objective rather than a tool, it is important to
recognize that this relationship is likely more complex than a simple cause-and-effect scenario.
Various factors could influence teachers' approaches, including but not limited to their prior
technological experiences, pedagogical beliefs, institutional policies, and the specific subject
areas they teach. Moreover, the correlation we observe between limited AI experience and a
focus on AI as a learning objective might be bidirectional. Teachers with less experience might
naturally gravitate towards teaching about AI rather than with it, but equally, a curriculum
emphasis on AI literacy could lead to teachers spending more time learning about AI than
experimenting with it as a tool.</p>
        <p>Future research could benefit from a more granular analysis of these factors, possibly
employing mixed methods to quantify the strength of various influences on teachers' AI
integration strategies. This could help disambiguate correlation from causation and provide a
more comprehensive understanding of how teachers' technological frames evolve in relation to
their practical experiences with AI.</p>
        <p>Given this complex interplay of factors influencing teachers' approaches to AI integration,
it is crucial to consider how these observations align with broader theoretical frameworks.
Particularly relevant is the concept of End-User Development (EUD) in educational contexts.
This interpretation, acknowledging both the predominant focus on AI as a learning objective
and the multifaceted influences shaping teachers' practices, aligns with EUD principles. In this
context, teachers are developing new understandings and practices around AI rather than
modifying the technology itself. The process of designing AI-integrated and AI-excluded
assessments can be seen as a form of practice-oriented customization, where teachers are
essentially creating new 'programs' of practice. This form of EUD is particularly relevant in
educational contexts where direct technological modification is often not feasible.</p>
        <p>The collaborative nature of the workshops also highlights the potential for community-driven
EUD in education. By sharing and refining their approaches to AI in assessment, teachers
engaged in a collective form of end-user development, creating shared knowledge and
methodologies that can be adapted to various educational contexts.</p>
        <p>
          The Venn diagram helps us visualize how teachers are navigating between different aspects
of AI integration in education. As they design assessments, they move between these
different areas, making decisions about how to balance assessment needs, AI education, and AI
integration. In doing so, teachers shift from the 'fast thinking' operations typically provided
by generative AI to the 'slow thinking' aspect of EUD, which involves critically verifying and
going beyond the information presented [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. This navigation process itself is a form of EUD, as
teachers are developing new practices and understandings in a complex, evolving educational
landscape.
        </p>
        <p>Moreover, the diagram illustrates how teachers' technological frames (their understanding
and perceptions of AI) influence their EUD activities across these areas. Teachers with different
levels of AI literacy or different views on AI's role in education might focus their EUD efforts
in different areas of the diagram. For instance, teachers who are more comfortable with AI
might design assessments that fall in the central intersection, integrating all three aspects, while
those less familiar with AI might focus more on the "Assessment" circle or the intersection of
"Assessment" and "AI as a learning objective".</p>
        <p>It is important to note that this study represents a snapshot of teachers' technological frames
and practices during a specific period (March to June 2023). Given the rapid evolution of AI
technologies, it is likely that these frames have since evolved. However, this temporal specificity
does not diminish the study's value. Rather, it underscores the dynamic nature of technological
frames and their impact on educational practices. The key insight lies not in the specific content
of the frames at that time, but in demonstrating how these frames shape teachers' approaches
to AI integration in assessment. This relationship between frames and practice remains relevant
even as the technology and teachers' understanding of it continue to evolve. Future research
could benefit from longitudinal studies to track how these frames change over time and how
such changes influence pedagogical practices.</p>
        <p>Therefore, it is crucial to understand what teachers know, how they reason, and the
decisions and assessment methods that result from this, that is, their technological frames.
Additionally, there is a need for teachers to receive ongoing professional development to create
better conditions for an assessment practice that effectively incorporates AI as a tool. Future
research and professional development initiatives could benefit from focusing on empowering
teachers to engage more deeply in this form of conceptual and practice-oriented EUD,
enhancing their ability to adapt and innovate in the face of rapidly evolving educational
technologies. This could involve supporting teachers in moving towards the central intersection
of the Venn diagram, where they can integrate AI as a learning objective, a tool, and a part of
the assessment process in balanced and innovative ways.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Bronwyn</given-names>
            <surname>Cumbo</surname>
          </string-name>
          and
          <string-name>
            <given-names>Neil</given-names>
            <surname>Selwyn</surname>
          </string-name>
          .
          <year>2022</year>
          .
          <article-title>Using participatory design approaches in educational research</article-title>
          .
          <source>International Journal of Research &amp; Method in Education 45</source>
          ,
          <issue>1</issue>
          :
          <fpage>60</fpage>
          -
          <lpage>72</lpage>
          . https://doi.org/10.1080/1743727X.2021.1902981
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Pelle</given-names>
            <surname>Ehn</surname>
          </string-name>
          .
          <year>1989</year>
          .
          <article-title>Work-oriented design of computer artifacts</article-title>
          . Lawrence Erlbaum, Hillsdale New Jersey.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Mar</given-names>
            <surname>Pérez-Sanagustín</surname>
          </string-name>
          , Miguel Nussbaum, Isabel Hilliger, Carlos Alario-Hoyos, Rachelle S. Heller,
          <string-name>
            <given-names>Peter</given-names>
            <surname>Twining</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Chin-Chung</given-names>
            <surname>Tsai</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Research on ICT in K-12 schools - A review of experimental and survey-based studies in computers &amp; education 2011 to 2015</article-title>
          .
          <source>Computers &amp; Education</source>
          <volume>104</volume>
          :
          <fpage>A1</fpage>
          -
          <lpage>A15</lpage>
          . https://doi.org/10.1016/j.compedu.2016.09.006
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Ben</given-names>
            <surname>Williamson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Felicitas</given-names>
            <surname>Macgilchrist</surname>
          </string-name>
          , and
          <string-name>
            <given-names>John</given-names>
            <surname>Potter</surname>
          </string-name>
          .
          <year>2023</year>
          .
          <article-title>Re-examining AI, automation and datafication in education</article-title>
          .
          <source>Learning, Media and Technology</source>
          <volume>48</volume>
          ,
          <issue>1</issue>
          :
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Phoebe</given-names>
            <surname>Lin</surname>
          </string-name>
          and
          <string-name>
            <given-names>Jessica</given-names>
            <surname>Van Brummelen</surname>
          </string-name>
          .
          <year>2021</year>
          .
          <article-title>Engaging teachers to co-design integrated AI curriculum for K-12 classrooms</article-title>
          .
          <source>In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Gerhard</given-names>
            <surname>Fischer</surname>
          </string-name>
          .
          <year>2023</year>
          .
          <article-title>Adaptive and adaptable systems: Differentiating and integrating AI and EUD</article-title>
          . In: Spano D. (ed.),
          <source>Proceedings of the 9th International Symposium on End-User Development</source>
          ,
          <fpage>3</fpage>
          -
          <lpage>18</lpage>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Mike</given-names>
            <surname>Sharples</surname>
          </string-name>
          .
          <year>2022</year>
          .
          <article-title>New AI tools that can write student essays require educators to rethink teaching and assessment</article-title>
          . Blog post. https://blogs.lse.ac.uk/impactofsocialsciences/2022/05/17/new-ai-tools-that-can-writestudent-essays-require-educators-to-rethink-teaching-and-assessment/
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>UNESCO</surname>
          </string-name>
          .
          <year>2023</year>
          .
          <article-title>Guidance for generative AI in education and research</article-title>
          . UNESCO.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Walter J.</given-names>
            <surname>Ong</surname>
          </string-name>
          .
          <year>2002</year>
          .
          <article-title>Orality and literacy: The technologizing of the word (2nd ed.)</article-title>
          . London: Routledge.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Wanda J.</given-names>
            <surname>Orlikowski</surname>
          </string-name>
          and
          <string-name>
            <given-names>Debra C.</given-names>
            <surname>Gash</surname>
          </string-name>
          .
          <year>1994</year>
          .
          <article-title>Technological frames: making sense of information technology in organizations</article-title>
          .
          <source>ACM Transactions on Information Systems (TOIS)</source>
          ,
          <volume>12</volume>
          ,
          <issue>2</issue>
          :
          <fpage>174</fpage>
          -
          <lpage>207</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Wanda J.</given-names>
            <surname>Orlikowski</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Sociomaterial practices: Exploring technology at work</article-title>
          .
          <source>Organization Studies</source>
          ,
          <volume>28</volume>
          ,
          <issue>9</issue>
          :
          <fpage>1435</fpage>
          -
          <lpage>1448</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Fergus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Botha</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Ostovar</surname>
          </string-name>
          .
          <year>2023</year>
          .
          <article-title>Evaluating Academic Answers Generated Using ChatGPT</article-title>
          .
          <source>Journal of Chemical Education</source>
          ,
          <volume>100</volume>
          (
          <issue>4</issue>
          ),
          <fpage>1672</fpage>
          -
          <lpage>1675</lpage>
          . https://doi.org/10.1021/acs.jchemed.3c00087
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>